appsignal / appsignal-python

🟦 AppSignal for Python package

Home Page: https://www.appsignal.com/

License: MIT License

Python 100.00%
apm appsignal django error-monitoring host-metrics monitoring performance-monitoring python

appsignal-python's Issues

Add Python agent tests?

Research whether it makes sense to add a test to the agent that uses the Python integration, maybe just a "smoke test". [Timebox: ?] If so, rewrite this issue with a to do list.

Get rid of docopt

The docopt library is only needed for the dev environment, but it is an unconditional dependency that the appsignal SDK brings into production. In Python, every dependency matters.

An easy solution would be to introduce a cli extra for it. However, docopt currently does nothing for the project that couldn't be done with the good old argparse. So my suggestion is to rewrite the project's little CLI with argparse and remove docopt from the dependencies. I'd be happy to take care of it; I'm sure I can migrate it without sacrificing anything. But since it's quite an invasive change (I might miss something, and you might have reasons for using docopt), I need a green light from you first.
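For a sense of scale, the whole CLI surface described in these issues (install, diagnose, demo) fits comfortably in plain argparse. A minimal sketch; the dispatch at the end is a placeholder:

```python
# A minimal argparse sketch; the command names come from the other issues in
# this repo (install, diagnose, demo), the dispatch is a placeholder.
import argparse


def main() -> None:
    parser = argparse.ArgumentParser(prog="appsignal")
    subparsers = parser.add_subparsers(dest="command", required=True)

    install = subparsers.add_parser("install", help="install AppSignal")
    install.add_argument("push_api_key", help="your AppSignal Push API key")

    subparsers.add_parser("diagnose", help="run and print the diagnose report")
    subparsers.add_parser("demo", help="send demonstration data to AppSignal")

    args = parser.parse_args()
    print(f"would run: {args.command}")  # dispatch to the real command here


if __name__ == "__main__":
    main()
```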

Comply with our Python maintenance policy

As per https://github.com/appsignal/appsignal-docs/pull/1294, we should be supporting Python 3.7 and above. Currently, we're only really running tests on Python 3.9 to 3.11. I recall the tests didn't run on earlier versions, but I don't think I took a close look at why: https://github.com/appsignal/appsignal-python/blob/main/.semaphore/semaphore.yml

We may have to give up some cool typing features to support Python 3.7: TypedDict is a Python 3.8 feature, and Unpack is a Python 3.11 feature. (Which makes me wonder how the tests are passing on 3.9 πŸ€”) I can live with that.
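If we do want to keep those typing features on older Pythons, one option (an assumption, not a decision) is a conditional import from typing_extensions, which would then become a runtime dependency:

```python
# Sketch: fall back to typing_extensions on Pythons that lack these features.
# TypedDict is stdlib from 3.8, Unpack only from 3.11.
import sys

if sys.version_info >= (3, 11):
    from typing import TypedDict, Unpack
else:
    from typing_extensions import TypedDict, Unpack


class Options(TypedDict, total=False):
    name: str
    push_api_key: str


def configure(**options: Unpack[Options]) -> None: ...
```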

The packages we publish already claim to work with those versions.

To do

  • Figure out what changes need to be done to support Python 3.8
  • Add those versions to the Semaphore test matrix

Add set_sql_body helper

Add a helper that sets a magic attribute containing a SQL query, which the agent will then automatically sanitize. A sketch follows below.
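A minimal sketch of what the helper could look like, assuming the agent watches a dedicated span attribute (the "appsignal.sql_body" key here is a placeholder, not a confirmed name):

```python
# Hypothetical helper: stamp the SQL body on the current span so the agent
# can sanitize it. The attribute key is an assumption.
from opentelemetry import trace


def set_sql_body(body: str) -> None:
    span = trace.get_current_span()
    span.set_attribute("appsignal.sql_body", body)
```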

To do

Metric helpers

Implement helpers to easily send custom metrics to AppSignal without having to understand OpenTelemetry metrics.

Ideally expose the same API as the Ruby implementation: https://docs.appsignal.com/metrics/custom.html#gauge

Note that we currently only support OpenTelemetry counters and gauges, not OpenTelemetry histograms (which would be AppSignal distribution values)

See #122 for sample data helpers. Will be documented as part of https://github.com/appsignal/appsignal-docs/issues/1492.
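A rough sketch of the counter half of such an API, built on the OpenTelemetry metrics API. The meter name and tag handling are assumptions; gauges are trickier because OpenTelemetry gauges are observable (callback-based), so a real implementation would have to buffer the last value per tag set.

```python
# Sketch only: mirror Ruby's increment_counter on top of OpenTelemetry.
from typing import Dict, Optional

from opentelemetry import metrics
from opentelemetry.metrics import Counter

_meter = metrics.get_meter("appsignal")  # meter name is an assumption
_counters: Dict[str, Counter] = {}


def increment_counter(name: str, value: int = 1, tags: Optional[dict] = None) -> None:
    if name not in _counters:
        _counters[name] = _meter.create_counter(name)
    _counters[name].add(value, attributes=tags or {})
```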

OpenTelemetry client may fail after all tests finish

After all tests had passed, I got a traceback from the OpenTelemetry client. It seems like there's a race between the client and pytest shutting down the server. I can't quite reproduce it, though; it has only happened to me once.

Exception while exporting Span batch.
Traceback (most recent call last):
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/connection.py", line 200, in _new_conn
    sock = connection.create_connection(
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/connectionpool.py", line 790, in urlopen
    response = self._make_request(
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/connectionpool.py", line 496, in _make_request
    conn.request(
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/connection.py", line 388, in request
    self.endheaders()
  File "/usr/lib/python3.10/http/client.py", line 1277, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1037, in _send_output
    self.send(msg)
  File "/usr/lib/python3.10/http/client.py", line 975, in send
    self.connect()
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/connection.py", line 236, in connect
    self.sock = self._new_conn()
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/connection.py", line 215, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f9a6b70f7f0>: Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/connectionpool.py", line 844, in urlopen
    retries = retries.increment(
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8099): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9a6b70f7f0>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/opentelemetry/sdk/trace/export/__init__.py", line 368, in _export_batch
    self.span_exporter.export(self.spans_list[:idx])  # type: ignore
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py", line 153, in export
    resp = self._export(serialized_data)
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py", line 124, in _export
    return self._session.post(
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/requests/sessions.py", line 637, in post
    return self.request("POST", url, data=data, json=json, **kwargs)
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/home/gram/.local/share/hatch/env/virtual/appsignal-beta/OY3JtW1S/appsignal-beta/lib/python3.10/site-packages/requests/adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8099): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9a6b70f7f0>: Failed to establish a new connection: [Errno 111] Connection refused'))

Running multiple apps on one machine

Currently, if you run multiple AppSignal for Python apps on the same machine with different configurations, only one agent will be able to report data, as only one agent will be able to listen for OpenTelemetry traces on port 8099.

[They will also step over each other as they will share the same working directory, but this will be fixable once #16 implements working_dir_path.]

We discussed this in this Slack conversation. The easy fix is to implement some sort of opentelemetry_port config option.

This issue already exists in the Python OpenTelemetry setup, so it's not a blocker for release.
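For illustration, per-app configuration could then look something like this (opentelemetry_port and working_directory_path are the proposed, not-yet-implemented options):

```python
# __appsignal__.py for one of the apps -- a sketch of the proposed options.
from appsignal import Appsignal

appsignal = Appsignal(
    name="app-one",
    push_api_key="your-push-api-key",
    opentelemetry_port=8100,  # proposed option: unique per app on the machine
    working_directory_path="/tmp/appsignal-app-one",  # unique per app
)
```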

To do

  • Add opentelemetry_port config option to:
    • Agent config, see also statsd_port config option for an example
    • Python package: #118
  • Document
    • Config option in Python
    • The need for opentelemetry_port and working_directory_path config options to be set and how (unique per app).

Support more Python instrumentations

Add support for these instrumentation libraries.

To do's per instrumentation

❗Create a new issue for each instrumentation before picking it up.

Instrumentations

Record Flask request parameters

The OpenTelemetry Flask integration supports request hooks. We should be able to build a Flask Request object from the WSGI environ present in the hooks and, if there are any params, stamp them on the span; see the sketch below the to-do list.

To do

  • Use the request hooks for Flask to stamp request parameters in the span
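A sketch of the hook, assuming the agent picks parameters up from a span attribute (the attribute key below is hypothetical):

```python
# Sketch: stamp query parameters on the server span via the Flask request hook.
import json

from flask import Flask, Request
from opentelemetry.instrumentation.flask import FlaskInstrumentor

app = Flask(__name__)


def request_hook(span, environ):
    # The hook hands us the WSGI environ; build a Flask Request from it.
    params = Request(environ).args.to_dict()
    if span and span.is_recording() and params:
        # Attribute key is an assumption, not a confirmed agent contract.
        span.set_attribute("appsignal.request.parameters", json.dumps(params))


FlaskInstrumentor().instrument_app(app, request_hook=request_hook)
```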

Add diagnose command

Add a CLI command for the diagnose report.

To do, minimum

  • Add CLI command: diagnose
  • Print output about (check the integrations guide and other languages for the specifics):
    • Library and language
    • Host
    • Agent diagnose report
    • Configuration
    • Push API key validation
    • Paths
  • Add an --environment option? Other languages have this.
  • Add a diagnose transmitter
  • Add a --[no-]send-report option to automatically send the report
  • Add Python to the diagnose_tests repo and add it as a submodule

To do, secondary

Create an issue for this if we want to split up the work.

  • Update the front-end to render the Python diagnose report

Installer should offer to install relevant dependencies

The installer should install the relevant OpenTelemetry dependencies for your project and suggest relevant next steps after installation.

For example, if your requirements.txt contains django, then we should offer to add opentelemetry-instrumentation-django to the requirements.txt for you. This is important because the dependency being installed is what will cause AppSignal to start that instrumentation. The same could be done for all other instrumentations.

The installer should also give you potential "next steps" instructions based on those dependencies. For example, if it installed opentelemetry-instrumentation-django for you, then it should suggest that you add the code snippet that initialises AppSignal to your manage.py file. For many instrumentations, there will be no obvious "next step" that is more specific than the generic "add this to your application's entrypoint" -- that's okay.

I don't think we need to run pip install for users after we add the new dependencies, but we could. I don't think we need to write the initialise code to manage.py ourselves, either.
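A naive sketch of the detection step, assuming a hand-maintained map from known dependencies to their instrumentation packages:

```python
# Sketch: suggest OpenTelemetry instrumentation packages based on what's
# already listed in requirements.txt. Parsing here is deliberately naive.
INSTRUMENTATIONS = {
    "django": "opentelemetry-instrumentation-django",
    "flask": "opentelemetry-instrumentation-flask",
    "celery": "opentelemetry-instrumentation-celery",
}


def suggest_instrumentations(path: str = "requirements.txt") -> list:
    with open(path) as f:
        # Take the part before any version specifier on each line.
        installed = {
            line.split("==")[0].split(">=")[0].strip().lower()
            for line in f
            if line.strip() and not line.lstrip().startswith("#")
        }
    return [pkg for dep, pkg in INSTRUMENTATIONS.items() if dep in installed]
```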

To do

  • Detect relevant existing dependencies from requirements.txt
  • Write relevant OpenTelemetry dependencies to requirements.txt
  • Suggest relevant next steps based on dependencies: install/config steps or link to docs

This is a wishlist -- if any part of this is hard, consider dropping it. For example, maybe we can just tell users which dependencies they need to add instead of writing them to the file ourselves.

Implement Flask instrumentation support

A Flask OpenTelemetry instrumentation exists, and supporting it would be neat. If Django is Rails, then Flask is Sinatra: it's the "light-weight", DIY option for making a web application.

To do

Errors from Celery tasks are all reported as ExceptionInfo making grouping bad

All Celery errors, even custom errors, seem to be reported as "ExceptionInfo" by OpenTelemetry:
This is not a blocker for me right now, but it's something to be aware of, and it will most certainly have someone pop up in support about it. I think this should be fixed in the Celery instrumentation, as that's what's reporting the error with this wrapper.

[Screenshot: Errors - pythondjango4-celery - AppSignal (2023-06-15)]

OpenTelemetry payload

{
events: {
  time_unix_nano: 1686813462797997050,
  name: "exception",
  attributes: {
    key: "exception.type",
    value: {
      string_value: "ExceptionInfo",
    },
  },
  attributes: {
    key: "exception.message",
    value: {
      string_value: "Traceback (most recent call last):\n  File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 477, in trace_task\n    R = retval = fun(*args, **kwargs)\n                 ^^^^^^^^^^^^^^^^^^^^\n  File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 760, in __protected_call__\n    return self.run(*args, **kwargs)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/app/tasks.py\", line 43, in custom_error_task\n    raise MyCeleryException(\"Custom Celery error\")\ntasks.MyCeleryException: Custom Celery error\n",
    },
  },
  attributes: {
    key: "exception.stacktrace",
    value: {
      string_value: "Traceback (most recent call last):\n  File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 477, in trace_task\n    R = retval = fun(*args, **kwargs)\n                 ^^^^^^^^^^^^^^^^^^^^\n  File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 760, in __protected_call__\n    return self.run(*args, **kwargs)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/app/tasks.py\", line 43, in custom_error_task\n    raise MyCeleryException(\"Custom Celery error\")\ntasks.MyCeleryException: Custom Celery error\n",
    },
  },
  attributes: {
    key: "exception.escaped",
    value: {
      string_value: "False",
    },
  },
},
status: {
  message: "Traceback (most recent call last):\n  File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 477, in trace_task\n    R = retval = fun(*args, **kwargs)\n                 ^^^^^^^^^^^^^^^^^^^^\n  File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 760, in __protected_call__\n    return self.run(*args, **kwargs)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/app/tasks.py\", line 43, in custom_error_task\n    raise MyCeleryException(\"Custom Celery error\")\ntasks.MyCeleryException: Custom Celery error\n",
  code: STATUS_CODE_ERROR
},
}

Implement missing configuration options

There are configuration options that can be set in the current Python beta for the stand-alone agent or via OpenTelemetry but aren't currently implemented in the Python integration's options map. We should implement those.

  • active - bool - #28
  • environment - #28
  • ca_file_path - #35
  • dns_servers - list - #31
  • enable_host_metrics - bool - #28
  • enable_nginx_metrics - bool - #28
  • enable_opentelemetry_http (always on) - bool - #28
  • enable_statsd - bool - #28
  • endpoint - #33
  • files_world_accessible - bool - #28
  • filter_parameters - list - #31
  • filter_session_data - list - #31
  • hostname - #22
  • http_proxy - #33
  • ignore_actions - list - #31
  • ignore_errors - list - #31
  • ignore_namespaces - list - #31
  • log - see #34
  • log_level - #37
  • log_path
  • name
  • push_api_key
  • request_headers - list - #39
  • revision
  • running_in_container - bool - #28
  • send_environment_metadata - bool - #28
  • send_params - bool - #28
  • send_session_data - bool - #28
  • working_directory_path - #33

Fix CI

  1. CI seems not to run for my PRs.
  2. CI seems to fail on the main branch.
  3. CI output is not available to non-maintainers. When I click "Details" on GitHub, it sends me to a workflows page that simply shows a 404.

If I may make a suggestion: GitHub Actions is a good choice for GitHub-based open-source projects. I don't love how GitHub Actions are configured, but they have the best integration with GitHub and none of the issues listed above.

Add Lintje

Our beloved nitpicker is missing. πŸ˜›

To do

  • Add it to the Semaphore CI

Define optional dependencies as extras

Currently, we rely on our installer detecting the presence of certain dependencies and installing the right opentelemetry-instrumentation-something package for them, or on users manually installing those dependencies as described in the docs.

We could use "package extras", which are optional dependencies that are specified at install time, to improve the experience somewhat. Instead of installing appsignal and opentelemetry-instrumentation-django, users could install appsignal[django], which would install both. The installer could also do this. (More than one can be installed -- e.g. appsignal[django,celery])

This also allows us to specify version bounds for the OpenTelemetry instrumentation packages, ensuring that our users don't end up upgrading their AppSignal package while still using an old (potentially unsupported or feature-lacking) version of the instrumentations.
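For illustration, the extras could be declared along these lines in pyproject.toml (exact version bounds are left out; real ones would track the instrumentation releases we support):

```toml
[project.optional-dependencies]
django = ["opentelemetry-instrumentation-django"]  # plus a version bound
flask = ["opentelemetry-instrumentation-flask"]    # plus a version bound
celery = ["opentelemetry-instrumentation-celery"]  # plus a version bound
```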

  • Define optional dependencies as extras
  • Change the docs to specify installation of extras instead of OTel packages
  • Change the installer to write extras instead of OTel packages to requirements.txt

Refactor config

To do

  • Add a Config class (see the sketch after this list)
  • That config class has an Options dict
  • It stores each source (system, file/initial, env) as a separate source on the config class
  • It merges the config options in the class
  • It validates the config options in the class
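A minimal sketch of that shape; option names, defaults, and sources are illustrative only:

```python
# Sketch of the Config class described above: keep each source separately,
# merge them in order of increasing precedence.
import os
from typing import Any, Dict, Optional

Options = Dict[str, Any]  # stand-in for a proper Options TypedDict


class Config:
    SOURCE_ORDER = ("default", "system", "initial", "environment")

    def __init__(self, initial: Optional[Options] = None) -> None:
        self.sources: Dict[str, Options] = {
            "default": {"active": False},  # illustrative default
            "system": {},                  # detected from the host
            "initial": initial or {},      # from __appsignal__.py
            "environment": self._from_environ(),
        }
        self.options: Options = {}
        for source in self.SOURCE_ORDER:
            self.options.update(self.sources[source])

    @staticmethod
    def _from_environ() -> Options:
        options: Options = {}
        if "APPSIGNAL_PUSH_API_KEY" in os.environ:
            options["push_api_key"] = os.environ["APPSIGNAL_PUSH_API_KEY"]
        return options
```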

Backtraces are not detected as Python backtraces

When sending any kind of error, the backtrace isn't being picked up as a Python backtrace for some reason.

appsignal.log seems to correctly list language_integration_version: Some("python-0.1.1"),
but the backtrace has the wrong format:

[2023-06-07T12:23:28 (agent) #413][Trace] Add OpenTelemetry exception to span: name: "ValueError" message: "Something went wrong" backtrace_json: "Traceback (most recent call last):\n  File \"/usr/local/lib/python3.11/site-packages/appsignal/cli.py\", line 299, in run\n    raise ValueError(\"Something went wrong\")\nValueError: Something went wrong\n"

It should be using backtrace, not backtrace_json, and it should not include lines that don't start with "File".

Something must be wrong with the language detection, but we set the _APPSIGNAL_LANGUAGE_INTEGRATION_VERSION correctly.

Add internal logger

To do

  • Add log option. Possible values: file (default) and stdout (see the sketch after this list)
  • Add integrations logger for the package. Support:
    • STDOUT logger, optional: can be separate issue
    • File logger (default)
  • Add log messages in places where it makes sense
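A rough sketch of the handler selection, assuming a log option with file (default) and stdout values; the log path and format are placeholders:

```python
# Sketch: pick a handler for the package's internal logger based on the
# "log" config option.
import logging
import sys


def build_internal_logger(log: str = "file") -> logging.Logger:
    logger = logging.getLogger("appsignal")
    if log == "stdout":
        handler = logging.StreamHandler(sys.stdout)
    else:
        handler = logging.FileHandler("/tmp/appsignal.log")
    handler.setFormatter(
        logging.Formatter("[%(asctime)s (process) #%(process)d][%(levelname)s] %(message)s")
    )
    logger.addHandler(handler)
    return logger
```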

Ignore queries on boot

When starting the Django test app we create a bunch of incidents for queries. Ignore them.


Implement SQLAlchemy support

An SQLAlchemy OpenTelemetry instrumentation exists, and supporting it would be neat. SQLAlchemy would be the Sequel equivalent: a featureful ORM that isn't bundled in with a specific framework, but instead is well suited to being part of your DIY web application setup.

To do

  • Create a test setup for the instrumentation (or add it to a Flask/FastAPI test setup)
    • Flask + SQLAlchemy
    • FastAPI + SQLModel (thin wrapper over SQLAlchemy, should be compatible)
  • Write an extractor for the agent
  • Add an instrumentation adder for the package
  • Document it

Diagnose CLI tool doesn't pick up default environment option

The default environment is set to development, but it's not picked up by default for the agent tests. See also #138.

$ python -m appsignal diagnose
Agent diagnostics
  Agent tests
    Started: started
    Process user id: 501
    Process user group id: 20
    Configuration: invalid
        Error: RequiredEnvVarNotPresent("APPSIGNAL_APP_ENV")
    Logger: -
    Working directory user id: -
    Working directory user group id: -
    Working directory permissions: -
    Lock path: -

Configuration
  environment: 'development'

Support FreeBSD

Our other integrations have FreeBSD support, but it was not implemented when adding other platform support because there was no easy way to test it. This issue is about implementing it and testing it the hard way, with a FreeBSD VirtualBox.

To do

  • Add FreeBSD entry (or entries) to the build-triple-to-platform-tag map.
    • In this map, the key is the Rust-like build triple, as it appears in agent.py, while the platform tag is "simply" the output of some platform-dependent Python function on that platform, except for all the cases where it is not. πŸ™ƒ I tried looking for an existing example in PyPI's package list, filtering by the FreeBSD support category, but couldn't find one. You may need to boot a FreeBSD machine to figure out what the right value is.
    • If you can figure out if the "manylinux" spec somehow applies for FreeBSD, that would be neat, as it would allow us to specify which libc we depend on. I don't think it does, though.
  • Release a build with FreeBSD artifacts to TestPyPI.
  • Test with a FreeBSD virtual machine that pip install appsignal (with the --index-url bits to force installation from TestPyPI) installs the right version of the package. Make sure the agent boots.

Alternative todo list if the above is ETOOHARD:

Diagnose CLI should load __appsignal__.py if present

To provide a complete picture in the diagnose report, the diagnose CLI should load the __appsignal__.py config file. Without it, we have an incomplete configuration and can't properly diagnose the app. One way to load the file is sketched below.
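A possible approach, sketched with runpy; the module-level appsignal client object is assumed to be how the config file exposes its configuration:

```python
# Sketch: run __appsignal__.py from the current directory and pull out the
# client object it defines, so its config can feed the diagnose report.
import os
import runpy


def load_appsignal_config():
    path = os.path.join(os.getcwd(), "__appsignal__.py")
    if not os.path.exists(path):
        return None
    module_globals = runpy.run_path(path)
    return module_globals.get("appsignal")
```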

To do

  • Load the __appsignal__.py file in the diagnose CLI when it's found and make sure its config is part of the diagnose report.
  • Load the __appsignal__.py file in the demo CLI when it's found.

Implement FastAPI instrumentation support

A FastAPI OpenTelemetry instrumentation exists, and supporting it would be neat. If Flask is Express, then FastAPI is NestJS: it adds more structure (and built-in typing support!) without being a full-fledged "my way or the highway" framework.

To do

Implement metrics

Add metrics to the Python integration using OpenTelemetry.

To do

  • Research how to implement metrics
    • OpenTelemetry metrics
      • Add OpenTelemetry metrics support to the agent (include the OpenTelemetry metrics protobuf)
  • Convert absolute counters to incremental counters. Right now the value is stored as a gauge, but we should calculate the delta and send that (see the sketch after this list). -- https://github.com/appsignal/appsignal-agent/pull/1050
  • Add metrics exporter to the Python package - #121
  • Document how to use OpenTelemetry metrics to report custom metrics.
    • Document which OpenTelemetry metric types we convert to which AppSignal metric types.
    • Add note that we don't support histograms.
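The delta calculation mentioned in the list above is roughly this (a sketch; treating a reset as a fresh full value is one possible choice):

```python
# Sketch: convert absolute counter readings into increments, per metric name.
_last_values = {}


def to_increment(name: str, absolute: float) -> float:
    last = _last_values.get(name)
    _last_values[name] = absolute
    if last is None or absolute < last:
        # First observation, or the counter was reset: report the full value.
        return absolute
    return absolute - last
```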

When this is done we have custom metrics support which is good enough. Close the issue and spin the next steps off into separate issues.

Next steps

Prefer `pyproject.toml` over `requirements.txt`

It seems that most Python projects have migrated to using pyproject.toml. If a pyproject.toml file exists, our installer should probably write the dependencies to that file, instead of to requirements.txt.

(In the absence of either file, creating a requirements.txt is fine, as creating a pyproject.toml is a whole thing.)

  • Installer supports pyproject.toml and uses it when it exists
  • Docs mention pyproject.toml alongside requirements.txt for manual installation

Rename package from `appsignal-beta` to `appsignal`

Plan

  • Publish a v0.2 to appsignal with the latest changes, including the package name change.
  • Manually publish v0.2 to appsignal-beta as well, with the changes above, as well as a change that, on import, prints to standard error, prompting you to change your dependency to the appsignal package. This will be the last appsignal-beta package published.

This provides a way for users to change the package name in their dependencies, without dragging in any other changes (v0.2 on appsignal-beta -> v0.2 on appsignal)

To do

appsignal-beta package has been moved to this repo: https://github.com/appsignal/appsignal-python-beta

First release for Python package

Releasing an alpha version of the package to PyPI would allow us to test the publishing process and give us confidence that pip selects the right package based on the platform tag; it's also a necessary step towards getting this to a state where customers can use it.

Note that PyPI has a "test register" that you can upload to and install from. We should probably use that for testing the release process: https://packaging.python.org/en/latest/tutorials/packaging-projects/#uploading-the-distribution-archives

To do

Feel free to split any of these into its own issue. It might also be convenient to do it in the opposite order -- release an alpha to PyPI first, using Hatch, then have that experience inform what Hatch support for Mono should look like.

Keep in mind that the way we use Hatch is a bit unusual, so this might be less "implement Python/Hatch support for Mono" and more "implement custom scripts support for Mono"?

  • Release alpha to PyPI test environment
  • Test that the release installs correctly

Report exceptions from `logger.exception()`

Investigate whether we should and can do this.

Python logger context

Python's built-in logging package allows for log messages to include exception information.

If the logging method is called from an exception handler (an except block) with the exc_info=True parameter, then it will report information about the currently handled exception. (A specific exception instance can also be passed as the value for exc_info)

There is also a logger.exception() method, which already sets exc_info=True, and which is meant to be used from within exception handlers.

Possible implementations

We could implement a logging handler which, if an exception is present, reports it to AppSignal (doing the equivalent of send_error() for it)

Would this be useful? Perhaps this would also report exceptions that are not relevant to the user. Users might prefer to have finer control over it by calling appsignal.send_error() manually.

We could patch the logging module (OpenTelemetry contrib does it, to implement some other functionality) to add this to all loggers, or we could provide a handler that allows users to add it to their logger.
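A sketch of the opt-in handler variant; send_error is the helper mentioned above, and treating it as importable from the package is an assumption:

```python
# Sketch: a logging handler that forwards handled exceptions to AppSignal.
import logging

from appsignal import send_error  # assumed helper, see above


class AppsignalErrorHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        # exc_info is set by logger.exception() and by exc_info=True.
        if record.exc_info and record.exc_info[1] is not None:
            send_error(record.exc_info[1])


logging.getLogger().addHandler(AppsignalErrorHandler())
```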

Install command

Add an install command to make the installation easier.

To do

  • Add CLI command: install
  • Accept one CLI argument containing the Push API key
  • Add an AppSignal config file, see docs test setup

Python package release tracking issue

Tracking issue for first and subsequent Python package releases.

To do

Stage 1: first alpha test release

  • Add changelog file (no mono required yet)
  • #3
  • #16
  • appsignal/mono#54
  • #29
    • Blocked on desired package name, now available as "appsignal-beta".

Stage 2: Beta testers

Stage 3: 1.0 release

Stage 4: Feature parity

Add demo command

Add a demo command to test AppSignal.

To do

  • Add a CLI command: demo
  • Send an error trace (see other integrations for examples)
  • Send a performance trace (see other integrations for examples)
  • Run the demo command (or the internal code) automatically after appsignal install #18
  • Update the integration-guide page

Implement logging

See if we can implement logging for the Python integration.

To do

  • Research how to implement logging
    • OpenTelemetry logging?
      • Add OpenTelemetry logging support to the agent
    • Agent extension
      • Add extension support to this project
  • Implement Python logging handler
  • Research if other useful logging integrations exist (add more bullet points if so)

Update README.md

Update the readme so that the package is titled "AppSignal for Python" and replace:

The AppSignal package collects exceptions and performance data from your Ruby applications and sends it to AppSignal for analysis. Get alerted when an error occurs or an endpoint is responding very slowly.

with:

The AppSignal package collects exceptions and performance data from your Python applications and sends it to AppSignal for analysis. Get alerted when an error occurs or an endpoint is responding very slowly.
