appsignal / appsignal-python
📦 AppSignal for Python package
Home Page: https://www.appsignal.com/
License: MIT License
Research whether it makes sense to add a test to the agent that uses the Python integration, maybe just a "smoke test". [Timebox: ?] If so, rewrite this issue with a to-do list.
The docopt library is needed only for the dev environment, but it is an unconditional dependency that the appsignal SDK brings into production. In Python, each dependency matters.

An easy solution would be to introduce a cli extra for that. However, docopt currently does nothing for the project that cannot be done by the good old argparse. So my suggestion is to rewrite the project's little CLI with argparse and remove docopt from the dependencies. I'd be happy to take care of it; I'm sure I can migrate it without sacrificing anything. But since it's quite an invasive change (I might miss something, and you might have reasons for using docopt), I need a green light from you first.
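The migration would be small. Here is a minimal sketch of what the CLI could look like on argparse; the subcommand and flag names are illustrative (taken from commands mentioned elsewhere in these issues), and the real flags would need to match the existing docopt usage string:

```python
import argparse

# Hypothetical argparse-based replacement for the docopt CLI.
# Subcommand names mirror those discussed in these issues.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="appsignal")
    subparsers = parser.add_subparsers(dest="command", required=True)
    for name in ("install", "demo", "diagnose"):
        subcommand = subparsers.add_parser(name)
        subcommand.add_argument(
            "--environment",
            help="the app environment, e.g. production",
        )
    return parser

# Parsing works the same way docopt's dict-based result did,
# but with attribute access and stdlib-only dependencies:
args = build_parser().parse_args(["diagnose", "--environment", "production"])
```

Since argparse is part of the standard library, this removes the runtime dependency entirely rather than hiding it behind an extra.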
An OpenTelemetry instrumentation for asyncpg exists: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-asyncpg
Split from #38 (comment).
Allow people to specify the app environment via the CLI. It now defaults to development, which is not always what people want. And we don't want people to have to use the env var either.
- --environment CLI config option to diagnose command.
- --environment CLI config option to demo command.

As per https://github.com/appsignal/appsignal-docs/pull/1294, we should be supporting Python 3.7 and above. Currently we're only really running tests on Python 3.9 to 3.11. I recall the tests didn't run in earlier versions, but I don't think I took a close look at why: https://github.com/appsignal/appsignal-python/blob/main/.semaphore/semaphore.yml
We may have to forgo some cool typing features to support Python 3.7. TypedDict is a Python 3.8 feature. Unpack is a Python 3.11 feature. (Which makes me wonder how the tests are passing on 3.9 🤔) I can live with that.
The packages we publish already claim to work with those versions.
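If we do want to keep TypedDict on older interpreters, the usual workaround is the typing-extensions backport. A sketch of the conditional import (the class name here is illustrative, and adding typing-extensions as a dependency for old interpreters is an assumption):

```python
try:
    # TypedDict landed in the standard library in Python 3.8;
    # Unpack only arrived in typing in 3.11.
    from typing import TypedDict
except ImportError:
    # Python 3.7 fallback, assuming the typing-extensions backport
    # is declared as a dependency for old interpreters.
    from typing_extensions import TypedDict

# Illustrative options dict shape (not the package's actual type):
class AppsignalOptions(TypedDict, total=False):
    name: str
    push_api_key: str
    environment: str
```

The same try/except pattern works for Unpack on anything below 3.11, which may explain the 3.9 test results if typing-extensions happens to be present transitively.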
Add a helper that sets a magic attribute that contains a SQL query. It will then be auto sanitized by the agent.
- appsignal.sql_body to agent
- set_sql_body to package
- set_sql_body helper on custom instrumentation page
- appsignal.sql_body attribute on OpenTelemetry attributes page
_APPSIGNAL_PROCESS_NAME
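A minimal sketch of what the set_sql_body helper could look like, assuming it only stamps the appsignal.sql_body attribute onto a span and leaves sanitization to the agent (how the current span is looked up is elided; a fake span is enough to show the shape):

```python
# Proposed helper: set the magic "appsignal.sql_body" attribute.
# The agent sanitizes the query before it is stored.
def set_sql_body(span, sql: str) -> None:
    span.set_attribute("appsignal.sql_body", sql)

# Any object with a set_attribute method works for illustration:
class FakeSpan:
    def __init__(self):
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value

span = FakeSpan()
set_sql_body(span, "SELECT * FROM users WHERE email = 'joe@example.com'")
```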
Implement helpers to easily send custom metrics to AppSignal without having to understand OpenTelemetry metrics.
Ideally expose the same API as the Ruby implementation: https://docs.appsignal.com/metrics/custom.html#gauge
Note that we currently only support OpenTelemetry counters and gauges, not OpenTelemetry histograms (which would be AppSignal distribution values)
See #122 for sample data helpers. Will be documented as part of https://github.com/appsignal/appsignal-docs/issues/1492.
After all tests passed, I got a traceback from the OpenTelemetry client. It seems like there are races between the client and pytest shutting down the server. I can't quite reproduce it, though; it's happened to me only once.
Exception while exporting Span batch.
Traceback (most recent call last):
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/connection.py", line 200, in _new_conn
sock = connection.create_connection(
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/connectionpool.py", line 790, in urlopen
response = self._make_request(
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/connectionpool.py", line 496, in _make_request
conn.request(
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/connection.py", line 388, in request
self.endheaders()
File "/usr/lib/python3.10/http/client.py", line 1277, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.10/http/client.py", line 1037, in _send_output
self.send(msg)
File "/usr/lib/python3.10/http/client.py", line 975, in send
self.connect()
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/connection.py", line 236, in connect
self.sock = self._new_conn()
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/connection.py", line 215, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f9a6b70f7f0>: Failed to establish a new connection: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/connectionpool.py", line 844, in urlopen
retries = retries.increment(
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8099): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9a6b70f7f0>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/opentelemetry/sdk/trace/export/__init__.py", line 368, in _export_batch
self.span_exporter.export(self.spans_list[:idx]) # type: ignore
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py", line 153, in export
resp = self._export(serialized_data)
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py", line 124, in _export
return self._session.post(
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/requests/sessions.py", line 637, in post
return self.request("POST", url, data=data, json=json, **kwargs)
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8099): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9a6b70f7f0>: Failed to establish a new connection: [Errno 111] Connection refused'))
Currently, if you run multiple AppSignal for Python apps in the same machine with different configurations, only one agent will be able to report data, as only one agent will be able to listen for OpenTelemetry traces on port 8099.
[They will also step over each other, as they will share the same working directory, but this will be fixable once #16 implements working_dir_path.]
We discussed this in this Slack conversation. The easy fix is to implement some sort of opentelemetry_port config option.
This issue already exists in the Python OpenTelemetry setup, so it's not a blocker for release.
- Add an opentelemetry_port config option (see the statsd_port config option for an example).
- Document the opentelemetry_port and working_directory_path config options: when they need to be set and how (unique per app).

Add support for these instrumentation libraries.
Create a new issue for each instrumentation before picking it up.
The OpenTelemetry Flask integration supports request hooks. We should be able to build a [Flask Request] object from the WSGI environ present in the hooks and take the params if any to stamp them in the span.
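A sketch of such a request hook. The Flask instrumentation calls request_hook(span, environ) for each request, so query-string params can be pulled out of the WSGI environ with the stdlib; the request.params.* attribute naming is an assumption, and wiring it up requires the opentelemetry-instrumentation-flask package:

```python
from urllib.parse import parse_qs

# Request hook with the signature the Flask instrumentation expects:
# request_hook(span, environ). Attribute names are illustrative.
def request_hook(span, environ):
    if span is None or not span.is_recording():
        return
    for key, values in parse_qs(environ.get("QUERY_STRING", "")).items():
        # parse_qs returns lists; flatten singletons for readability.
        span.set_attribute(
            f"request.params.{key}",
            values[0] if len(values) == 1 else values,
        )
```

Wiring it up would then look roughly like `FlaskInstrumentor().instrument_app(app, request_hook=request_hook)`. Body params (form/JSON) are trickier, since reading the WSGI input stream in the hook could consume it before the app does.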
Add a CLI command for the diagnose report.

- --environment option? Other languages have this.
- --[no-]send-report option to automatically send the report.

Create an issue for this if we want to split up the work.
Implement sample data helpers, like those implemented for Node.js.
The installer should install the relevant OpenTelemetry dependencies for your project and suggest relevant next steps after installation.

For example, if your requirements.txt contains django, then we should offer to add opentelemetry-instrumentation-django to the requirements.txt for you. This is important because the dependency being installed is what will cause AppSignal to start that instrumentation. The same could be done for all other instrumentations.

The installer should also give you potential "next steps" instructions based on those dependencies. For example, if it installed opentelemetry-instrumentation-django for you, then it should suggest that you add the code snippet that initialises AppSignal to your manage.py file. For many instrumentations, there will be no obvious "next step" that is more specific than the generic "add this to your application's entrypoint" -- that's okay.

I don't think we need to run pip install for users after we add the new dependencies, but we could. I don't think we need to write the initialise code to manage.py ourselves, either.
This is a wishlist -- if any part of this is hard, consider dropping it. For example, maybe we can just tell users which dependencies they need to add instead of writing them to the file ourselves.
A Flask OpenTelemetry instrumentation exists, and supporting it would be neat. If Django is Rails, then Flask is Sinatra: it's the "light-weight", DIY option for making a web application.
All Celery errors, even custom errors, seem to be reported as "ExceptionInfo" by OpenTelemetry:
Not a blocker for me right now, but something to be aware of, and it will most certainly have someone pop up in support about it. This is something I think should be fixed in the Celery instrumentation, as that's what's reporting it with this wrapper error.
{
events: {
time_unix_nano: 1686813462797997050,
name: "exception",
attributes: {
key: "exception.type",
value: {
string_value: "ExceptionInfo",
},
},
attributes: {
key: "exception.message",
value: {
string_value: "Traceback (most recent call last):\n File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 477, in trace_task\n R = retval = fun(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 760, in __protected_call__\n return self.run(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/app/tasks.py\", line 43, in custom_error_task\n raise MyCeleryException(\"Custom Celery error\")\ntasks.MyCeleryException: Custom Celery error\n",
},
},
attributes: {
key: "exception.stacktrace",
value: {
string_value: "Traceback (most recent call last):\n File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 477, in trace_task\n R = retval = fun(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 760, in __protected_call__\n return self.run(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/app/tasks.py\", line 43, in custom_error_task\n raise MyCeleryException(\"Custom Celery error\")\ntasks.MyCeleryException: Custom Celery error\n",
},
},
attributes: {
key: "exception.escaped",
value: {
string_value: "False",
},
},
},
status: {
message: "Traceback (most recent call last):\n File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 477, in trace_task\n R = retval = fun(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 760, in __protected_call__\n return self.run(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/app/tasks.py\", line 43, in custom_error_task\n raise MyCeleryException(\"Custom Celery error\")\ntasks.MyCeleryException: Custom Celery error\n",
code: STATUS_CODE_ERROR
},
}
Proposal: Implement Celery task monitoring metrics.
Might require us to build a minutely probes system first: #129
There are configuration options that can be set in the current Python beta for the stand-alone agent or via OpenTelemetry but aren't currently implemented in the Python integration's options map. We should implement those.
Proposal: Implement Python garbage collection metrics?
main branch.

If I may suggest, GitHub Actions is a good choice for GitHub-based open-source projects. I don't like the way GitHub Actions are configured, but they have the best integration with GitHub and none of the issues listed above.
Our beloved nitpicker is missing.
Currently, we rely on our installer detecting the presence of certain dependencies and installing the right opentelemetry-instrumentation-something package for them, or on users manually installing those dependencies as described in the docs.

We could use "package extras", which are optional dependencies that are specified at install time, to improve the experience somewhat. Instead of installing appsignal and opentelemetry-instrumentation-django, users could install appsignal[django], which would install both. The installer could also do this. (More than one can be installed -- e.g. appsignal[django,celery].)
This also allows us to specify version bounds for the OpenTelemetry instrumentation package, ensuring that our users don't end up upgrading their AppSignal package while using an old (potentially unsupported or feature-lacking) version of the instrumentations.
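A sketch of how the extras could be declared in our pyproject.toml; the version bounds are illustrative, not actual tested ranges:

```toml
[project.optional-dependencies]
# Each extra pins a compatible instrumentation version, so
# "pip install appsignal[django]" pulls in both packages.
django = ["opentelemetry-instrumentation-django >=0.39b0, <1.0"]
celery = ["opentelemetry-instrumentation-celery >=0.39b0, <1.0"]
```

One caveat: extras are opt-in at install time, so the installer would still need its dependency detection to suggest the right extras to users.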
An OpenTelemetry HTTPX instrumentation exists: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-httpx
Release a first prerelease to https://pypi.org/

- mono publish --alpha/--beta/--rc
- --repo test
When sending any kind of error backtrace, the Python language_integration_version isn't being picked up as Python for some reason.

The appsignal.log seems to correctly list: language_integration_version: Some("python-0.1.1")

But the backtrace has the wrong format:
[2023-06-07T12:23:28 (agent) #413][Trace] Add OpenTelemetry exception to span: name: "ValueError" message: "Something went wrong" backtrace_json: "Traceback (most recent call last):\n File \"/usr/local/lib/python3.11/site-packages/appsignal/cli.py\", line 299, in run\n raise ValueError(\"Something went wrong\")\nValueError: Something went wrong\n"
It should be using backtrace, not backtrace_json, and not include lines that don't start with File.

Something must be wrong with the language detection, but we set the _APPSIGNAL_LANGUAGE_INTEGRATION_VERSION correctly.
Add a log config option. Possible values: file (default) and stdout.
An SQLAlchemy OpenTelemetry instrumentation exists, and supporting it would be neat. SQLAlchemy would be the Sequel equivalent: a featureful ORM that isn't bundled in with a specific framework, but instead is well suited to being part of your DIY web application setup.
The default environment is set to development, but it's not picked up by default for the agent tests. See also #138.
$ python -m appsignal diagnose
Agent diagnostics
Agent tests
Started: started
Process user id: 501
Process user group id: 20
Configuration: invalid
Error: RequiredEnvVarNotPresent("APPSIGNAL_APP_ENV")
Logger: -
Working directory user id: -
Working directory user group id: -
Working directory permissions: -
Lock path: -
Configuration
environment: 'development'
Our other integrations have FreeBSD support, but it was not implemented when adding other platform support because there was no easy way to test it. This issue is about implementing it and testing it the hard way, with a FreeBSD VirtualBox.
- agent.py, while the platform tag is "simply" the output of some platform-dependent Python function on that platform, except for all the cases where it is not. I tried looking for an existing example in PyPI's package list, filtering by the FreeBSD support category, but couldn't find one. You may need to boot a FreeBSD machine to figure out what the right value is.
- pip install appsignal (with the --index-url bits to force installation from TestPyPI) installs the right version of the package. Make sure the agent boots.

Alternative todo list if the above is ETOOHARD:
Blocked by https://github.com/appsignal/appsignal-agent/issues/1042
Add a similar system to what we have for other integrations: https://docs.appsignal.com/ruby/instrumentation/minutely-probes.html
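The core of such a system is just a registry plus a timer loop. A minimal sketch; the register_probe/unregister_probe names mirror the Ruby integration's API and are assumptions here, and the once-per-minute thread is left out so only the loop body is shown:

```python
import logging

# Registry of probe callables. In the real implementation a
# background thread would call run_probes_once every minute;
# it is split out here so the behavior is easy to test.
_probes = {}

def register_probe(name, probe):
    _probes[name] = probe

def unregister_probe(name):
    _probes.pop(name, None)

def run_probes_once():
    for name, probe in list(_probes.items()):
        try:
            probe()
        except Exception:
            # One failing probe must not take down the others.
            logging.getLogger("appsignal").exception("probe %r failed", name)
```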
To provide a complete picture in the diagnose report, the diagnose CLI should load the __appsignal__.py config file. Without it, we have an incomplete configuration and can't properly diagnose it with all the information.
- Load the __appsignal__.py file in the diagnose CLI when it's found, and make sure its config is part of the diagnose report.
- Load the __appsignal__.py file in the demo CLI when it's found.

A FastAPI OpenTelemetry instrumentation exists, and supporting it would be neat. If Flask is Express, then FastAPI is NestJS: it adds more structure (and built-in typing support!) without being a full-fledged "my way or the highway" framework.
The AppSignal changelog needs to allow for Python changelog entries. This also means we need to pick a color for Python! 🧑‍🎨
Add metrics to the Python integration using OpenTelemetry.
When this is done we have custom metrics support which is good enough. Close the issue and spin the next steps off into separate issues.
It seems that most Python projects have migrated to using pyproject.toml. If a pyproject.toml file exists, our installer should probably write the dependencies to that file, instead of to requirements.txt.
(In the absence of either of the files, creating a requirements.txt is fine, as creating a pyproject.toml is a whole thing.)
- Installer detects pyproject.toml and uses it when it exists.
- Document pyproject.toml alongside requirements.txt for manual installation.

Include the cacert.pem file, as included with other integrations.
- ca_file_path config option.

- Publish v0.2 to appsignal with the latest changes, including the package name change.
- Publish v0.2 to appsignal-beta as well, with the changes above, as well as a change that, on import, prints to standard error, prompting you to change your dependency to the appsignal package. This will be the last appsignal-beta package published.

This provides a way for users to change the package name in their dependencies, without dragging in any other changes (v0.2 on appsignal-beta -> v0.2 on appsignal).
The appsignal-beta package has been moved to this repo: https://github.com/appsignal/appsignal-python-beta
- Rename appsignal-beta to appsignal on main -- #112
- Release appsignal v0.2 (but not appsignal-beta)

Releasing an alpha version of the package to PyPI would allow us to test the publishing process, give us confidence that pip selects the right package based on the platform tag, and it's a necessary step towards getting this to a state where customers can use it.
Note that PyPI has a "test register" that you can upload to and install from. We should probably use that for testing the release process: https://packaging.python.org/en/latest/tutorials/packaging-projects/#uploading-the-distribution-archives
Feel free to split any of these into its own issue. It might also be convenient to do it in the opposite order -- release an alpha to PyPI first, using Hatch, then have that experience inform how Hatch support for Mono should be.
Keep in mind that the way we use Hatch is a bit unusual, so this might be less "implement Python/Hatch support for Mono" and more "implement custom scripts support for Mono"?
Investigate whether we should and can do this.
Python's built-in logging package allows for log messages to include exception information.

If the logging method is called from an exception handler (an except block) with the exc_info=True parameter, then it will report information about the currently handled exception. (A specific exception instance can also be passed as the value for exc_info.)

There is also a logger.exception() method, which already sets exc_info=True, and which is meant to be used from within exception handlers.

We could implement a logging handler which, if an exception is present, reports it to AppSignal (doing the equivalent of send_error() for it).

Would this be useful? Perhaps this would also report exceptions that are not relevant to the user. Users might prefer to have finer control over it by calling appsignal.send_error() manually.
We could patch the logging module (OpenTelemetry contrib does it, to implement some other functionality) to add this to all loggers, or we could provide a handler that allows users to add it to their logger.
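If we go the handler route, a minimal sketch of the handler half; the error-reporting callable is injected so the example is self-contained, where the real handler would call appsignal.send_error:

```python
import logging

# Handler that forwards handled exceptions to an error reporter.
# It only acts when record.exc_info is populated, i.e. when the
# log call used exc_info=True or logger.exception().
class ErrorReportHandler(logging.Handler):
    def __init__(self, send_error, level=logging.ERROR):
        super().__init__(level=level)
        self._send_error = send_error

    def emit(self, record):
        if record.exc_info and record.exc_info[1] is not None:
            self._send_error(record.exc_info[1])

# Usage sketch: collect reported exceptions in a list.
reported = []
logger = logging.getLogger("appsignal-demo")
logger.propagate = False  # keep the example's output quiet
logger.addHandler(ErrorReportHandler(reported.append))

try:
    1 / 0
except ZeroDivisionError:
    logger.exception("something went wrong")
```

The patch-the-logging-module approach would instead attach something like this to every logger automatically; the opt-in handler keeps the finer control mentioned above.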
Currently these integrations show no params. It would be nice to try and get params showing, ideally both from the request body JSON and from the path parameters.
Add an install command to make the installation easier.
Tracking issue for first and subsequent Python package releases.
Add a demo command to test AppSignal.

- demo
- appsignal install #18

See if we can implement logging for the Python integration.
Update readme so that the package is titled "AppSignal for Python" and replace:
The AppSignal package collects exceptions and performance data from your Ruby applications and sends it to AppSignal for analysis. Get alerted when an error occurs or an endpoint is responding very slowly.
with:
The AppSignal package collects exceptions and performance data from your Python applications and sends it to AppSignal for analysis. Get alerted when an error occurs or an endpoint is responding very slowly.