
Comments (13)

FranckPachot commented on May 27, 2024

OK, so it seems to be a different problem. I'll check with @droberts-yb whether we can reproduce it.

from yugabyte-db.

FranckPachot commented on May 27, 2024

Hi

> due to the scope of the transaction, at some point Yugabyte needs to have a full copy of the data in memory

It should not. In the SQL layer, the memory used is per batch of rows; in the storage layer, it is per RocksDB memtable.

> consumes an excessive amount of memory,

What is the metric for the graph you show? Can you check `mem_tracker_server_Tablets_overhead_PerTablet` (for the DocDB layer) and `allocated_mem_bytes` from `pg_stat_activity` (for the SQL layer)?
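For instance, per-backend memory can be watched with a query like the one below (illustrative; `allocated_mem_bytes` is the YSQL-specific `pg_stat_activity` column mentioned above, and `dsn` is a hypothetical connection string):

```python
# Illustrative query to watch per-backend memory via YugabyteDB's
# pg_stat_activity, which exposes an allocated_mem_bytes column.
QUERY = """
SELECT pid, state, query, allocated_mem_bytes
FROM pg_stat_activity
ORDER BY allocated_mem_bytes DESC NULLS LAST
"""

# Hypothetical psycopg2 usage (not run here):
# with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
#     cur.execute(QUERY)
#     for row in cur.fetchall():
#         print(row)
```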

> (e.g. 300 MB with around 310 million rows)

What is the size of one row?
Can you share a small sample of the measurement files, so we can test with it?


treibholzhq commented on May 27, 2024

Hi @FranckPachot, thank you very much for your answer and your clarification regarding memory usage.

The graph shows the memory consumption metrics on OpenShift for the yb-tservers stateful set: the yb-tserver-0 pod consumes 25 GB of memory when ingesting a 300 MB file (310 million rows).

allocated_mem_bytes is just rising and rising:
[screenshot: allocated_mem_bytes steadily increasing]

I tried to get the mem_tracker_server_Tablets_overhead_PerTablet metric from the Web UI (Metrics); unfortunately, I cannot see yb-tserver-0 reporting it (I can see the other two tservers reporting it, though).

The size of one row is quite small: it contains just a UUID, a signal name (mostly < 256 chars), a timestamp, and a 64-bit floating-point number.

Example:

8884dc5b-1f08-47e1-8d2c-8e8b8f716f71 m_torque 2023-07-01 12:01:56.874 +0200 917.0

That's 65 bytes.

Oddly enough, I don't get any error in the YugabyteDB TServer logs; the connection to the pod just terminates:

psycopg2.OperationalError: server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.

It's very much like the issue described here: #5453

Thank you very much for your help.


FranckPachot commented on May 27, 2024

I tried some variations to reproduce something similar to what you see (high pg_stat_activity.allocated_mem_bytes). I got this when explicitly disabling intermediate commits with `set yb_default_copy_from_rows_per_transaction=0`:

[screenshot: allocated_mem_bytes high during the COPY]

It finally failed with `ERROR: temporary file size exceeds temp_file_limit`.
This doesn't happen when the foreign key is removed.

I'll open an issue. For the moment, the workarounds are intermediate commits (but from what you say, you want the whole load to be transactional) or loading without the foreign key (there's a DISABLE_FK_CHECK option in COPY, but I guess you have to use copy_expert rather than copy_from).
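As an illustration of the intermediate-commit workaround: the batching helper below is plain Python, while the commented psycopg2 usage is a hypothetical sketch (`read_rows`, `dsn`, and the batch size are placeholders, not code from this thread):

```python
from itertools import islice

def batched(rows, batch_size):
    """Yield successive lists of at most batch_size rows."""
    it = iter(rows)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Hypothetical psycopg2 usage (not run here): COPY and commit each batch
# separately, so no single transaction has to hold all 310M rows.
#
# with psycopg2.connect(dsn) as conn:
#     with conn.cursor() as cur:
#         for batch in batched(read_rows("measurement.mf4"), 20_000):
#             cur.copy_from(io.StringIO("".join(batch)), "signals")
#             conn.commit()
```

The trade-off, as noted above, is that a failure mid-load leaves earlier batches committed, so the load is no longer all-or-nothing.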


treibholzhq commented on May 27, 2024

@FranckPachot thanks for the investigation.

I have rewritten my script to use psycopg3, which is much simpler to use, so I tried using COPY signals FROM STDIN WITH (DISABLE_FK_CHECK), but this fails with the same behaviour.

I then removed the foreign key entirely and created the signals table without it. No luck. Still consuming way too much memory.

Btw: select * from pg_settings where name = 'yb_default_copy_from_rows_per_transaction'; shows 20000, which is the default I guess.


treibholzhq commented on May 27, 2024

Side note: I have been using a connection set to autocommit this morning and can confirm that memory consumption stays flat in this case.

So I guess it is definitely an issue with transaction size?


FranckPachot commented on May 27, 2024

OK, my first thought was that yb_default_copy_from_rows_per_transaction was disabled with a warning when not in autocommit, but I tested it quickly and didn't confirm it. My test was probably wrong. We need to reproduce it to understand what is allocated (the foreign key was a possibility, but maybe it's something else). Usually bulk loads are done with intermediate commits.


treibholzhq commented on May 27, 2024

Yeah, the challenge here is that our bulk load belongs to an entity, and that entity can hold a lot of data (100 GB in the worst case), so the desired behaviour is "all of the entity" or "nothing".


treibholzhq commented on May 27, 2024

@FranckPachot @droberts-yb in the meantime, I tried a different approach: I use COPY to ingest into a temporary table and then do an INSERT INTO SELECT from this temp table into the real table, which gives me transactional behaviour when the connection fails (and also helps with parallel ingests).
First, I had to increase the maximum temp file limit (it was set to 1 GB; I set it to 100 GB, just to be safe).
Then, I tried to ingest two files (same as above) in parallel; they failed with a Disk Full error (I have 50 GB disks now).
I created the temp table without foreign keys using CREATE TEMP TABLE signals_temp (LIKE signals INCLUDING DEFAULTS), so I am not sure if that is the same issue you mentioned above, @FranckPachot.
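The two-step load described above could be sketched as follows (the SQL statements use the table names from this thread; the commented psycopg3 calls and `dsn`/`rows` are hypothetical placeholders):

```python
# SQL for the two-step load: a non-transactional COPY into a temp table,
# then a single transactional INSERT ... SELECT into the real table.
CREATE_TEMP = "CREATE TEMP TABLE signals_temp (LIKE signals INCLUDING DEFAULTS)"
COPY_TEMP = "COPY signals_temp FROM STDIN"
MOVE = "INSERT INTO signals SELECT * FROM signals_temp"

# Hypothetical psycopg3 usage (not run here):
# with psycopg.connect(dsn, autocommit=True) as conn:
#     conn.execute(CREATE_TEMP)
#     with conn.cursor() as cur, cur.copy(COPY_TEMP) as copy:
#         for row in rows:
#             copy.write_row(row)
#     conn.execute(MOVE)  # one statement, so it is atomic on its own
```

If the connection dies mid-COPY, the temp table vanishes with the session, so nothing partial ever lands in the real table.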


FranckPachot commented on May 27, 2024

Thanks for testing. I'm not surprised that it is the same from a temporary table, but it helps to understand what happens. What is the size of the file (you said 300 MB, right?), and the size of the temporary table? (select pg_table_size('signals_temp');)


treibholzhq commented on May 27, 2024

Yes, it's the same file (300 MB, 313 million rows). Size of the signals_temp table is:

select pg_size_pretty(pg_table_size('signals_temp'));
31 GB


FranckPachot commented on May 27, 2024

313 million rows of 65 bytes each are about 19 GB, so 31 GB seems large, but it's plausible. What format is the file, to hold that in 300 MB?

So, yes, that seems too much to load in one transaction. Maybe you can load it non-transactionally and use a view to make it visible at the end, like this: https://dev.to/yugabyte/yugabytedb-transactional-load-with-non-transactional-copy-10ac


treibholzhq commented on May 27, 2024

The file is .mf4 (https://www.asam.net/standards/detail/mdf/), a compact binary format for measurement data.

I believe I am already loading non-transactionally: I am setting autocommit=True on the psycopg connection, so the COPY into the temp table already commits on its own. So yes, I realize that the temp table should be around 19 GB, but I do not understand where the 12 GB of overhead comes from.

For reference, a 3.5 MB file with 2 million rows has a temp table size of 150 MB, which is okay given that the size should be around 130 MB (65 bytes × 2 million = 130 MB).

But this overhead grows much faster than linearly: as stated, the overhead for the 300 MB file (313 million rows) is 12 GB.
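The growth of the per-row overhead can be checked with a quick calculation using the figures quoted in this thread (65 bytes/row, 150 MB observed for 2 million rows, 31 GB observed for 313 million rows):

```python
# Quick check of the overhead figures quoted above.
ROW_BYTES = 65

# (row count, observed temp-table size in bytes)
small = (2_000_000, 150 * 10**6)    # 3.5 MB file -> 150 MB temp table
large = (313_000_000, 31 * 10**9)   # 300 MB file -> 31 GB temp table

def overhead_ratio(rows, observed_bytes):
    """Observed on-disk size divided by the raw payload (rows * 65 bytes)."""
    return observed_bytes / (rows * ROW_BYTES)

print(round(overhead_ratio(*small), 2))  # ~1.15x for the small file
print(round(overhead_ratio(*large), 2))  # ~1.52x for the large file
```

So the relative overhead grows from roughly 1.15× to roughly 1.5× between the two file sizes: clearly super-linear, though not literally exponential.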

Typically, our measurement files are 2.5 GB in size with over 1 billion rows, and can in the worst case be 100 GB.

The overhead adds up pretty quickly, then.

We could reduce the row size a bit, though, by storing a bigserial or even a serial in the signals table instead of a UUID.

