Comments (31)

ku1ik avatar ku1ik commented on June 13, 2024 2

I'll release a new version with the fix soon.

ku1ik avatar ku1ik commented on June 13, 2024 2

Thank you @jiriks74! I'm observing the same on asciinema.org so we can safely assume the problem has been solved. Thanks for the assistance! 🤝

dmitry-j-mikhin avatar dmitry-j-mikhin commented on June 13, 2024 1

@ku1ik thanks for the feedback and for this wonderful project. If you need more information, logs, etc., I can provide that as well.

ku1ik avatar ku1ik commented on June 13, 2024 1

Thanks. Yeah, the problem is definitely there. I'm trying to pinpoint it now.

ku1ik avatar ku1ik commented on June 13, 2024 1

I nailed it. This is the memory graph from asciinema.org, with the prometheus endpoint disabled. Note the last quarter - flat as a table :)

image

jiriks74 avatar jiriks74 commented on June 13, 2024 1

The new release seems to be working. It's only been about 30 minutes, but I can already say that I no longer see the memory leak trend I saw before.

Old release

image

New release

image

Note

The spike on the second graph is when I updated to the new release and started the server.

Both graphs cover a 30-minute interval.

dmitry-j-mikhin avatar dmitry-j-mikhin commented on June 13, 2024

v20240203 also has this issue: the memory usage of the beam.smp process is constantly increasing.

ku1ik avatar ku1ik commented on June 13, 2024

Thanks for the report. Great that you have the graph; this indeed looks like a leak. I'll try to reproduce it locally.

jiriks74 avatar jiriks74 commented on June 13, 2024

I'm currently using the latest tag in Docker, so I don't know what exact version I'm on.

But I can confirm that this indeed happens. I've been running the server for about 4 days now and I've noticed the memory usage steadily climbing. I've restarted the container once to see if it was just a fluke, but the memory usage is still climbing like before. I'll probably set up a cron job to restart the container at midnight every day to work around this until it's patched (a rough sketch is at the end of this comment).

Also: there's no usage at all. I have a single account on this instance, there are no recordings (apart from the welcome one), no recordings are shared so no websites are loading them, and I haven't used it for about 2 days now, yet memory usage is still going up.

Here's my graph:
image

If you're interested I can make some tests, etc., as it's a personal instance where downtime won't annoy anyone.
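
For reference, a rough crontab sketch of the nightly-restart workaround mentioned above, assuming the container is named asciinema-asciinema-1 (the Compose-generated name on my setup) and that a brief restart at midnight is acceptable:

    # restart the asciinema container every day at midnight
    0 0 * * * docker restart asciinema-asciinema-1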

jiriks74 avatar jiriks74 commented on June 13, 2024

I just modified my view so that I can filter containers more easily, and I also found a way to capture non-visible parts of a website in Firefox (the built-in screenshot tool is still broken), so here's a better screenshot. You can also see the network usage to get an idea of how little this instance is used.

asciinema-memory_leak

Note

If you see multiple containers named asciinema-asciinema-1, it means there was a container recreation/restart; when I recreate/restart the container, some stats get a different ID. It's still the same container.

ku1ik avatar ku1ik commented on June 13, 2024

Does the memory usage grow indefinitely, or does it top out at some value, then go down and then up again?

ku1ik avatar ku1ik commented on June 13, 2024

I can observe similar behavior on asciinema.org:

image

jiriks74 avatar jiriks74 commented on June 13, 2024

Does the memory usage grow indefinitely, or does it top out at some value, then go down and then up again?

So far it grows indefinitely, until I restart it.

dmitry-j-mikhin avatar dmitry-j-mikhin commented on June 13, 2024

So far it grows indefinitely, until I restart it.

Or until the OOM killer kills the process for eating up all the memory :)

dmitry-j-mikhin avatar dmitry-j-mikhin commented on June 13, 2024

These are my 30d graphs:
image

jiriks74 avatar jiriks74 commented on June 13, 2024

I see that around the time the issue with v20231217 was reported, the admin dashboard was added:

added admin console endpoint in port 4002, with Phoenix LiveDashboard at http://localhost:4002/dashboard

https://github.com/asciinema/asciinema-server/releases/tag/v20231216

Maybe that could be the culprit?

ku1ik avatar ku1ik commented on June 13, 2024

I think it's the built-in prometheus endpoint (http://IP:9568/metrics), which, when not queried, accumulates aggregated data in an ETS table.

ku1ik avatar ku1ik commented on June 13, 2024

That's likely it: beam-telemetry/telemetry_metrics_prometheus_core#52

jiriks74 avatar jiriks74 commented on June 13, 2024

I agree. I exposed the dashboard (recreated the container) and then opened it after a while. The usage hasn't gone up for an hour now (though that may be too little time for the trend to be visible):

image

Note

The memory spike is when I recreated the container to expose port 4002.

jiriks74 avatar jiriks74 commented on June 13, 2024

I'll let this run overnight to see how the memory usage behaves and whether opening the dashboard causes the memory to be freed.

jiriks74 avatar jiriks74 commented on June 13, 2024

I did a quick test before going to sleep, and after opening the dashboard the memory usage dropped. I can confirm that the admin panel feature is the cause here.

image

Note

RSS memory usage didn't go down. This may be due to my 2 GB of swap, which is 80% free most of the time. From reading the upstream issue, it looks like the metrics are not flushed from memory until they're loaded by the panel. Since the metrics are static data, it would make sense for Linux to move them to swap.

jiriks74 avatar jiriks74 commented on June 13, 2024

Would there be a config option for it? If I were able to monitor things like the number of recordings, etc., I might integrate it into my monitoring stack.

ku1ik avatar ku1ik commented on June 13, 2024

The built-in prometheus endpoint provided the following stats:

[
  # VM
  last_value("vm.memory.total", unit: :byte),
  last_value("vm.total_run_queue_lengths.total"),
  last_value("vm.total_run_queue_lengths.cpu"),
  last_value("vm.total_run_queue_lengths.io"),
  # Ecto
  distribution("asciinema.repo.query.total_time", repo_distribution),
  distribution("asciinema.repo.query.decode_time", repo_distribution),
  distribution("asciinema.repo.query.query_time", repo_distribution),
  distribution("asciinema.repo.query.idle_time", repo_distribution),
  distribution("asciinema.repo.query.queue_time", repo_distribution),
  # Phoenix
  distribution("phoenix.endpoint.start.system_time", phoenix_distribution),
  distribution("phoenix.endpoint.stop.duration", phoenix_distribution),
  distribution("phoenix.router_dispatch.start.system_time", phoenix_distribution),
  distribution("phoenix.router_dispatch.exception.duration", phoenix_distribution),
  distribution("phoenix.router_dispatch.stop.duration", phoenix_distribution),
  distribution("phoenix.socket_connected.duration", phoenix_distribution),
  distribution("phoenix.channel_join.duration", phoenix_distribution),
  distribution("phoenix.channel_handled_in.duration", phoenix_distribution),
  # Oban
  counter("oban.job.start.count", oban_counter),
  distribution("oban.job.stop.duration", oban_distribution),
  distribution("oban.job.exception.duration", oban_distribution)
]
So: BEAM VM memory/CPU stats, HTTP request stats, database query stats, and background job stats. It didn't have application-level stats like recording counts etc.

I removed it for now to fix the leak, especially since it was undocumented and nobody used it (including me).

I may re-add it in the future, with some more useful stats, and with an explicit config option to enable it.

jiriks74 avatar jiriks74 commented on June 13, 2024

Yeah, it was weird for me when I saw it in the logs (admin panel listening on 4002) while the docs said there's no admin panel.

And these stats are useless for me, since I gather them another way: CPU, memory and network Rx/Tx are collected by Docker and cAdvisor, and HTTP requests by Traefik.

ku1ik avatar ku1ik commented on June 13, 2024

Yeah, it was weird for me when I saw it in the logs (admin panel listening on 4002) while the docs said there's no admin panel.

Both the dashboard and the /metrics endpoint report similar metrics, from the same source.

That admin panel is still there, and it was not the cause of the leak. The panel is a basic dashboard with some metrics, but it doesn't use any extra resources (not noticeably) and doesn't hurt when it's not used.

The problem was the prometheus /metrics endpoint, which I now removed.

jiriks74 avatar jiriks74 commented on June 13, 2024

How does the /metrics endpoint work? I've accessed :4002/metrics and I get a 404. I'd just like to see whether I could fetch and discard the stats until there's a new release.

ku1ik avatar ku1ik commented on June 13, 2024

@jiriks74 this one runs on its own port, 9568.
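
A rough sketch of the fetch-and-discard idea from the previous comment, assuming port 9568 is reachable from the host running cron (it may need to be published from the container first):

    # scrape the metrics endpoint every minute and discard the output,
    # which (per the discussion above) should keep the accumulated data from growing
    * * * * * curl -sf http://localhost:9568/metrics > /dev/null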

ku1ik avatar ku1ik commented on June 13, 2024

@jiriks74 the new release is out (Docker image); I'm just wrapping up the release notes.

ku1ik avatar ku1ik commented on June 13, 2024

@jiriks74 https://github.com/asciinema/asciinema-server/releases/tag/v20240515

jiriks74 avatar jiriks74 commented on June 13, 2024

Thanks <3

jiriks74 avatar jiriks74 commented on June 13, 2024

Hello, here's my last comment on this, I promise. Since memory leaks can sometimes be a pain, I kept monitoring in case something came up. I can now confirm that there hasn't been any significant memory usage growth over the last 24 hours. Below is a graph over 48 hours, to make the difference between the old and new releases easier to see.

image
