
earhorn's People

Contributors

dependabot[bot], github-actions[bot], jooola, renovate[bot]

earhorn's Issues

List index out of range uploading to S3

2022-10-11 08:08:50.651 | DEBUG    | earhorn.stream:listen:98 - running command 'ffmpeg -hide_banner -nostats -re -vn -i https://edge.iono.fm/xice/uctradio_live_medium.aac -map 0 -af silencedetect=noise=-60dB:d=2.0 -f null /dev/null -map 0 -map_metadata -1 -f segment -strftime 1 -segment_time 600 -segment_format mp3 -segment_list segments.csv -reset_timestamps 1 incoming/segment.%Y-%m-%d-%H-%M-%S.mp3'
2022-10-11 08:08:50.683 | DEBUG    | earhorn.stream_silence:parse_process_output:85 - starting to parse stdout
Exception in thread Thread-4 (wait_for_segments):
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/venv/lib/python3.10/site-packages/earhorn/stream_archive.py", line 166, in wait_for_segments
    self.ingest_pending_segments()
  File "/opt/venv/lib/python3.10/site-packages/earhorn/stream_archive.py", line 156, in ingest_pending_segments
    self._segment_filepath(segment),
  File "/opt/venv/lib/python3.10/site-packages/earhorn/stream_archive.py", line 109, in _segment_filepath
    parts = segment.name.split(".")[1].split("-")
IndexError: list index out of range

Version: v0.13.0
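
The failing line splits the segment filename on "." and assumes a timestamp part exists, which breaks for names that don't follow the segment.%Y-%m-%d-%H-%M-%S.mp3 pattern (for example a stray segments.csv in the incoming directory). A minimal sketch of a more defensive parse; the helper name is made up and this is not the project's actual code:

from datetime import datetime
from pathlib import Path
from typing import Optional

# Hypothetical defensive variant of the parsing done in _segment_filepath:
# return None instead of raising IndexError when the name doesn't match
# the expected 'segment.%Y-%m-%d-%H-%M-%S.mp3' pattern.
def parse_segment_timestamp(segment: Path) -> Optional[datetime]:
    parts = segment.name.split(".")
    if len(parts) < 3 or parts[0] != "segment":
        return None  # e.g. 'segments.csv' or an unrelated file
    try:
        return datetime.strptime(parts[1], "%Y-%m-%d-%H-%M-%S")
    except ValueError:
        return None

print(parse_segment_timestamp(Path("segment.2022-10-11-08-08-50.mp3")))
print(parse_segment_timestamp(Path("segments.csv")))  # None instead of a crash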

Reduce dependencies

I used a lot of dependencies that aren't strictly required, and some of them could be removed. For example, click could be replaced with argparse; we don't need the complexity of click.
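
As a rough sketch, the entry point could look like this with argparse instead of click; the option names are illustrative and not earhorn's actual flags:

import argparse

def build_parser() -> argparse.ArgumentParser:
    # Illustrative options only; earhorn's real flags may differ.
    parser = argparse.ArgumentParser(prog="earhorn", description="Stream monitor and archiver")
    parser.add_argument("--stream-url", required=True, help="URL of the stream to monitor")
    parser.add_argument("--archive-path", default=None, help="Where to store archive segments")
    parser.add_argument("--silence-noise", type=float, default=-60.0, help="Silence threshold in dB")
    return parser

def cli() -> None:
    args = build_parser().parse_args()
    print(args.stream_url, args.archive_path, args.silence_noise)

if __name__ == "__main__":
    cli()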

ReadTimeout stops stream listener without restart

It looks like there was an interruption in the stream that Earhorn was listening to, resulting in the following error. It did not attempt to restart listening.

2022-10-31 08:24:03.540 | DEBUG    | earhorn.stream_archive_s3:ingest_segment:33 - uploading segment incoming/segment.2022-10-31-08-22-33.mp3 to s3://uct-radio-archive-test                                                                   
2022-10-31 08:24:03.796 | DEBUG    | earhorn.stream:listen:112 - command exited with 0    
2022-10-31 08:24:04.684 | INFO     | earhorn.stream:listen:117 - stream listener stopped
Traceback (most recent call last):                                                                                     
  File "/opt/venv/lib/python3.10/site-packages/httpcore/_exceptions.py", line 8, in map_exceptions
    yield                                                                                                              
  File "/opt/venv/lib/python3.10/site-packages/httpcore/backends/sync.py", line 26, in read
    return self._sock.recv(max_bytes)                                                                                  
  File "/usr/local/lib/python3.10/ssl.py", line 1259, in recv
    return self.read(buflen)                                                                                           
  File "/usr/local/lib/python3.10/ssl.py", line 1132, in read
    return self._sslobj.read(len)                                                                                      
TimeoutError: The read operation timed out   
                                                                                                                       
During handling of the above exception, another exception occurred:
                                                                                                                       
Traceback (most recent call last):              
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 60, in map_httpcore_exceptions
    yield                          
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 218, in handle_request
    resp = self._pool.handle_request(req)
  File "/opt/venv/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 253, in handle_request   
    raise exc                         
  File "/opt/venv/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 237, in handle_request
    response = connection.handle_request(request)          
  File "/opt/venv/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 90, in handle_request                                                                                                                                       
    return self._connection.handle_request(request)
  File "/opt/venv/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 105, in handle_request                                                                                                                                          
    raise exc                                                                                                                                                                                                                                  
  File "/opt/venv/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 84, in handle_request
    ) = self._receive_response_headers(**kwargs)
  File "/opt/venv/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 148, in _receive_response_headers
    event = self._receive_event(timeout=timeout)
  File "/opt/venv/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 177, in _receive_event
    data = self._network_stream.read(
  File "/opt/venv/lib/python3.10/site-packages/httpcore/backends/sync.py", line 24, in read
    with map_exceptions(exc_map):
  File "/usr/local/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/opt/venv/lib/python3.10/site-packages/httpcore/_exceptions.py", line 12, in map_exceptions
    raise to_exc(exc)
httpcore.ReadTimeout: The read operation timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/venv/bin/earhorn", line 8, in <module>
    sys.exit(cli())
  File "/opt/venv/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/opt/venv/lib/python3.10/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/opt/venv/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/venv/lib/python3.10/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/opt/venv/lib/python3.10/site-packages/earhorn/main.py", line 251, in cli
    stream_listener.run_forever()
  File "/opt/venv/lib/python3.10/site-packages/earhorn/stream.py", line 83, in run_forever
    self.check_stream()
  File "/opt/venv/lib/python3.10/site-packages/earhorn/stream.py", line 69, in check_stream
    with self._client.stream("GET", self.stream_url) as response:
  File "/usr/local/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 858, in stream
    response = self.send(
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 902, in send
    response = self._send_handling_auth(
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 930, in _send_handling_auth
    response = self._send_handling_redirects(
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 967, in _send_handling_redirects
    response = self._send_single_request(request)
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 1003, in _send_single_request
    response = transport.handle_request(request)
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 217, in handle_request
    with map_httpcore_exceptions():
  File "/usr/local/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ReadTimeout: The read operation timed out

This is running v0.13.0 via Docker.
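
One possible direction, assuming check_stream keeps using an httpx.Client as in the traceback: treat httpx.TransportError (which covers ReadTimeout) as "stream down" instead of letting it escape run_forever. This is a hypothetical sketch, not the actual earhorn code:

import time
import httpx

STREAM_URL = "https://example.org/stream.aac"  # placeholder

def check_stream(client: httpx.Client, url: str) -> bool:
    # Return False on any transport problem (timeouts, resets, DNS failures)
    # so the caller can keep polling instead of crashing.
    try:
        with client.stream("GET", url) as response:
            response.raise_for_status()
            return True
    except (httpx.TransportError, httpx.HTTPStatusError):
        return False

def run_forever() -> None:
    with httpx.Client(timeout=10.0) as client:
        while True:
            if not check_stream(client, STREAM_URL):
                print("stream down, retrying in 5 seconds")
            time.sleep(5)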

High and unstable CPU / Mem usage for the archive on bullseye

The CPU and memory usage of the archiver is unstable and sometimes consumes the entire virtual machine's resources. This happens on bullseye, while it wasn't an issue on buster.

I previously added a way to set the -re and -copy flags to prevent the archiver from eating too many resources, but it seems we have another issue here.

Storing state of the monitored stream

The app isn't storing the state of the stream in any way; on changes, an event is fired and handled by a user-provided hook or integrations.

A few features might need the state stored somewhere in order to perform advanced actions, for example: only fire a silence event if the silence exceeds 10 seconds. This would be doable without storing the state of the stream, but it seems to add too much complexity.

How the state is stored is also a good question: a SQLite database might be overkill since a simple key/value store would be enough, but having a dedicated key/value store as a dependency might also be overkill (Redis?).
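
For the 10-second example, an in-memory object might already be enough before reaching for SQLite or Redis. A minimal sketch, with names that are made up rather than earhorn's:

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class SilenceState:
    """Tracks the current silence span so events can be filtered by duration."""
    started_at: Optional[datetime] = None

    def on_silence_start(self, when: datetime) -> None:
        self.started_at = when

    def on_silence_end(self, when: datetime, min_duration: timedelta) -> bool:
        """Return True if the silence lasted long enough to fire an event."""
        if self.started_at is None:
            return False
        duration = when - self.started_at
        self.started_at = None
        return duration >= min_duration

state = SilenceState()
state.on_silence_start(datetime(2022, 10, 4, 10, 21, 30))
print(state.on_silence_end(datetime(2022, 10, 4, 10, 21, 45), timedelta(seconds=10)))  # True: 15s exceeds the 10s threshold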

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Awaiting Schedule

These updates are awaiting their schedule. Click on a checkbox to get an update now.

  • chore(deps): lock file maintenance

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

docker-compose
docker-compose.override.yml
  • savonet/liquidsoap v2.2.5
docker-compose.yml
dockerfile
Dockerfile
  • python 3.12-alpine
  • python 3.12-alpine
github-actions
.github/workflows/actions/python/action.yml
  • actions/setup-python v5
  • actions/cache v4
.github/workflows/ci.yml
  • actions/checkout v4
  • pre-commit/action v3.0.1
  • actions/checkout v4
  • actions/checkout v4
  • actions/checkout v4
  • actions/checkout v4
  • docker/setup-buildx-action v3
  • docker/login-action v3
  • docker/login-action v3
  • docker/metadata-action v5
  • docker/build-push-action v5
.github/workflows/release-please.yml
  • google-github-actions/release-please-action v4
pep621
pyproject.toml
poetry
pyproject.toml
  • python >=3.9,<4.0
  • click ^8.0.1
  • httpx ^0.27.0
  • lxml ^5.0.0
  • prometheus-client ^0.20.0
  • pydantic ^2.0.0
  • typing-extensions ^4.2.0
  • python-dotenv ^1.0.0
  • boto3 ^1.24.8
  • sentry-sdk ^1.14.0
  • bandit ^1.7.4
  • black ^24.2
  • flake8 ^7.0.0
  • isort ^5.9.3
  • lxml-stubs ^0.5.0
  • moto ^5.0.0
  • mypy ^1.0
  • pylint ^3.0.0
  • pytest ^8.0.0
  • pytest-cov ^5.0.0
  • pytest-httpx ^0.30.0
  • pytest-xdist ^3.0.0
  • testcontainers ^4.0.0
  • types-boto3 ^1.0.2
pre-commit
.pre-commit-config.yaml
  • pre-commit/pre-commit-hooks v4.6.0
  • pre-commit/mirrors-prettier v3.1.0
  • codespell-project/codespell v2.2.6
  • asottile/pyupgrade v3.15.2
  • python-poetry/poetry 1.8.3

  • Check this box to trigger a request for Renovate to run again on this repository

Integer rounding in validate_silence_duration

The logs are full of the following error:

2022-10-04 10:21:41.218 | ERROR    | earhorn.stream_silence:validate_silence_duration:49 - computed duration '2' differs from ffmpeg duration '2.2'

It seems they all occur when ffmpeg reports a decimal silence duration, while the computed duration is always reported as an integer.

Monitoring tool integrations

Right now the state of the stream being monitored is handled by triggering a hook (script) provided by the user.

I don't know if this tool should handle every specific monitoring tool out there, but I have a few options I would like to support (mainly because I am using them). How these integrations should be added to this tool is to be defined: plugin based? Ship different integrations based on user requests/proposals?

Also, the state of the stream isn't currently stored in the app; some of the tools below might require some kind of storage to keep track of it.

For now I see the following tools being integrated:

  • User provided script (hook)
  • Webhooks? (I would not use them, but maybe it is a good idea?)
  • Prometheus (expose an HTTP metrics endpoint; see the sketch after this list)
  • Zabbix (either send values using zabbix_sender or provide an easy integration and templates for an agent to access the state)
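
For the Prometheus option, a minimal sketch with prometheus_client; the metric names are invented for illustration:

from prometheus_client import Gauge, start_http_server

# Illustrative metric names, not earhorn's actual ones.
stream_up = Gauge("earhorn_stream_up", "1 if the monitored stream is up, 0 otherwise")
stream_silence = Gauge("earhorn_stream_silence", "1 while silence is being detected")

def handle_event(kind: str, name: str) -> None:
    # Called by the event loop whenever a status/silence event fires.
    if name == "status":
        stream_up.set(1 if kind == "up" else 0)
    elif name == "silence":
        stream_silence.set(1 if kind == "start" else 0)

if __name__ == "__main__":
    start_http_server(9950)  # expose /metrics on an arbitrary port
    handle_event("up", "status")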

Segments.csv not found after stream listener restart

It looks like, after a network disruption, the stream listener restarted, but the stream archive couldn't find the new segments.csv file. I would guess this is because it wasn't created yet. The system continued to run without any additional errors, but no audio was uploaded to S3.

2022-09-14 16:39:12.033 | DEBUG    | earhorn.stream_archive_s3:ingest_segment:24 - uploading segment /tmp/earhorn-0s3s3n4h/segment.2022-09-14-16-30-04.mp3 to s3://uct-radio-archive                                                           
2022-09-14 16:39:12.415 | DEBUG    | earhorn.stream:listen:107 - command exited with 0                                                                                                                                                         
2022-09-14 16:39:17.124 | INFO     | earhorn.stream:listen:112 - stream listener stopped                                                                                                                                                       
2022-09-14 16:39:21.475 | ERROR    | earhorn.stream:check_stream:77 - could not stream from 'https://edge.iono.fm/xice/uctradio_live_medium.aac'                                                                                               
2022-09-14 16:39:21.475 | DEBUG    | earhorn.stream:check_stream:78 - Client error '404 Not Found' for url 'https://edge.iono.fm/xice/uctradio_live_medium.aac'                                       
For more information check: https://httpstatuses.com/404                                                                                                                                                                                       
2022-09-14 16:39:21.476 | DEBUG    | earhorn.event:run:91 - when=datetime.datetime(2022, 9, 14, 16, 39, 21, 476112) kind='down' name='status'                                                       
2022-09-14 16:39:30.965 | ERROR    | earhorn.stream:check_stream:77 - could not stream from 'https://edge.iono.fm/xice/uctradio_live_medium.aac'                                                                                               
2022-09-14 16:39:30.965 | DEBUG    | earhorn.stream:check_stream:78 - Client error '404 Not Found' for url 'https://edge.iono.fm/xice/uctradio_live_medium.aac'                                     
For more information check: https://httpstatuses.com/404                                                                                                                                                                                       
2022-09-14 16:39:30.965 | DEBUG    | earhorn.event:run:91 - when=datetime.datetime(2022, 9, 14, 16, 39, 30, 965519) kind='down' name='status'                                                                                                  
2022-09-14 16:39:36.638 | DEBUG    | earhorn.event:run:91 - when=datetime.datetime(2022, 9, 14, 16, 39, 36, 638086) kind='up' name='status'                                                        
2022-09-14 16:39:36.638 | DEBUG    | earhorn.event:run:91 - when=datetime.datetime(2022, 9, 14, 16, 39, 36, 638157) kind='end' seconds=Decimal('0.0') duration=None name='silence'                  
2022-09-14 16:39:36.639 | INFO     | earhorn.stream:listen:90 - starting stream listener                                                                                                                                                       
2022-09-14 16:39:36.639 | DEBUG    | earhorn.stream:listen:93 - running command 'ffmpeg -hide_banner -nostats -re -vn -i https://edge.iono.fm/xice/uctradio_live_medium.aac -map 0 -af silencedetect=noise=-60dB:d=2 -f null /dev/null -map 0 -
map_metadata -1 -f segment -strftime 1 -segment_time 600 -segment_format mp3 -segment_list /tmp/earhorn-0s3s3n4h/segments.csv -reset_timestamps 1 /tmp/earhorn-0s3s3n4h/segment.%Y-%m-%d-%H-%M-%S.mp3'
2022-09-14 16:39:36.642 | DEBUG    | earhorn.stream_silence:parse_process_output:82 - starting to parse stdout                                                                                                                                 
Exception in thread Thread-6 (wait_for_segments):                                                                                                                                                                                              
Traceback (most recent call last):                                                                                                                                                                                                             
  File "/usr/local/lib/python3.10/threading.py", line 1016, in _bootstrap_inner                                                                                                                                                                
    self.run()                                                                                                                                                                                                                                 
  File "/usr/local/lib/python3.10/threading.py", line 953, in run                                                                                                                                                                              
    self._target(*self._args, **self._kwargs)                                                                                                                                                                                                  
  File "/opt/venv/lib/python3.10/site-packages/earhorn/stream_archive.py", line 140, in wait_for_segments                                                                                                                                      
    with self.tmp_segment_list.open(                                                                                                                                                                                                           
  File "/usr/local/lib/python3.10/pathlib.py", line 1119, in open                                                                                                                                                                              
    return self._accessor.open(self, mode, buffering, encoding, errors,                                                                                                                                                                        
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/earhorn-0s3s3n4h/segments.csv'
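
One possible fix would be for wait_for_segments to wait until ffmpeg has actually created the segment list before opening it, instead of assuming it exists right after a restart. A sketch of that idea, with a hypothetical helper and a placeholder path:

import time
from pathlib import Path

def wait_for_file(path: Path, timeout: float = 30.0, poll: float = 0.5) -> bool:
    """Poll until ffmpeg has created the file (e.g. segments.csv) or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if path.exists():
            return True
        time.sleep(poll)
    return False

segment_list = Path("/tmp/earhorn-example/segments.csv")  # placeholder path
if wait_for_file(segment_list):
    with segment_list.open() as handle:
        for line in handle:
            print(line.strip())
else:
    print("segment list never appeared, skipping this cycle")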

Don't fail silently when icecast mountpoints are down

When icecast mountpoints drop, Earhorn does not detect that and stops.

Auto-restarting the earhorn service didn't help, because the exception from the sub-thread didn't make the whole service fail, it simply made it stop.

Add icecast server down metric

We currently have a "stream is down" metric that checks whether the monitored stream is up, but we don't have a metric that checks whether the icecast server is running.

There should be a distinction between the server and the stream.

Earhorn gets rejected if icecast reaches its client limit

When icecast reaches its client limit, any further connection is rejected. For listeners this is expected, but we want to keep tracking the stats during a heavy workload, so earhorn should not get rejected.

We need to find a way to use/reuse the same connection.

Maybe #94 is a smart way to work around the client limit, as stats clients might have a dedicated counter and not be counted in the client count.
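
One small step in that direction, assuming the stats are fetched over HTTP with httpx: keep a single httpx.Client alive and reuse its pooled connection between polls instead of opening a fresh connection each time. Sketch only; the URL and interval are placeholders:

import time
import httpx

STATS_URL = "http://icecast.example.org:8000/status-json.xsl"  # placeholder

# A single client keeps the underlying TCP connection alive between requests,
# so repeated stats polls don't each open a new connection to icecast.
with httpx.Client(timeout=5.0) as client:
    for _ in range(3):
        response = client.get(STATS_URL)
        response.raise_for_status()
        print(response.json().get("icestats", {}))
        time.sleep(10)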

Fix different silence duration between ffmpeg and earhorn

We compute the silence duration in both ffmpeg and earhorn, and log an error when the difference between the two is too big.

The computation on the Python side could probably be improved with the decimal module. We might also have some rounding issues.

Some logs:

2022-08-16 08:35:46.212 | ERROR    | earhorn.stream_silence:validate_silence_duration:48 - computed duration '3.0' differs from ffmpeg duration '2.94234'
2022-08-16 08:35:46.212 | INFO     | earhorn.stream_silence:parse_process_output:90 - silence end: 2022-08-16 08:35:46.212178
2022-08-16 08:35:46.212 | DEBUG    | earhorn.event:run:90 - silence: {"when": "2022-08-16T08:35:46.212178", "kind": "end", "seconds": 32756.4, "duration": 2.94234, "name": "silence"}

2022-08-16 08:50:56.694 | INFO     | earhorn.stream_silence:parse_process_output:90 - silence start: 2022-08-16 08:50:56.694790
2022-08-16 08:50:56.696 | DEBUG    | earhorn.event:run:90 - silence: {"when": "2022-08-16T08:50:56.694790", "kind": "start", "seconds": 33664.4, "duration": null, "name": "silence"}

2022-08-16 08:50:56.811 | ERROR    | earhorn.stream_silence:validate_silence_duration:48 - computed duration '2.1999999999970896' differs from ffmpeg duration '2.21741'
2022-08-16 08:50:56.812 | INFO     | earhorn.stream_silence:parse_process_output:90 - silence end: 2022-08-16 08:50:56.811735
2022-08-16 08:50:56.812 | DEBUG    | earhorn.event:run:90 - silence: {"when": "2022-08-16T08:50:56.811735", "kind": "end", "seconds": 33666.6, "duration": 2.21741, "name": "silence"}

2022-08-16 08:54:45.953 | INFO     | earhorn.stream_silence:parse_process_output:90 - silence start: 2022-08-16 08:54:45.953585
2022-08-16 08:54:45.953 | DEBUG    | earhorn.event:run:90 - silence: {"when": "2022-08-16T08:54:45.953585", "kind": "start", "seconds": 33893.8, "duration": null, "name": "silence"}

2022-08-16 09:24:31.719 | ERROR    | earhorn.stream_silence:validate_silence_duration:48 - computed duration '5.400000000001455' differs from ffmpeg duration '5.42009'
2022-08-16 09:24:31.720 | INFO     | earhorn.stream_silence:parse_process_output:90 - silence end: 2022-08-16 09:24:31.719796
2022-08-16 09:24:31.720 | DEBUG    | earhorn.event:run:90 - silence: {"when": "2022-08-16T09:24:31.719796", "kind": "end", "seconds": 35681.4, "duration": 5.42009, "name": "silence"}
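
The float noise in the computed durations (2.1999999999970896, 5.400000000001455) looks like ordinary binary floating-point error from subtracting the ffmpeg timestamps. A sketch of doing the subtraction with the decimal module instead; the values are taken from the logs above:

from decimal import Decimal

# Parse ffmpeg's silence_start/silence_end timestamps as Decimal straight from
# the log text, so the subtraction stays exact instead of accumulating float error.
silence_start = Decimal("33664.4")
silence_end = Decimal("33666.6")

computed = silence_end - silence_start
ffmpeg_reported = Decimal("2.21741")

print(computed)                         # 2.2, no trailing float noise
print(abs(computed - ffmpeg_reported))  # 0.01741, small enough to tolerate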

S3 Upload failure doesn't get re-queued

It seems that this exception isn't handled properly:

2022-10-07 08:19:14.900 | DEBUG    | earhorn.stream_archive_s3:ingest_segment:24 - uploading segment /tmp/earhorn-9z6xj4uy/segment.2022-10-07-08-09-14.mp3 to s3://uct-radio-archive           
Exception in thread Thread-4 (wait_for_segments):                                                                                                                                              
Traceback (most recent call last):       
  File "/opt/venv/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn                                                                                                  
    conn = connection.create_connection(       
  File "/opt/venv/lib/python3.10/site-packages/urllib3/util/connection.py", line 95, in create_connection                                                                                      
    raise err                                  
  File "/opt/venv/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection                                                                                      
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/venv/lib/python3.10/site-packages/botocore/httpsession.py", line 448, in send
    urllib_response = conn.urlopen(
  File "/opt/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen 
    retries = retries.increment(
  File "/opt/venv/lib/python3.10/site-packages/urllib3/util/retry.py", line 525, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/opt/venv/lib/python3.10/site-packages/urllib3/packages/six.py", line 770, in reraise
    raise value
  File "/opt/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen 
    httplib_response = self._make_request(
  File "/opt/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 398, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/opt/venv/lib/python3.10/site-packages/urllib3/connection.py", line 239, in request
    super(HTTPConnection, self).request(method, url, body=body, headers=headers)
  File "/usr/local/lib/python3.10/http/client.py", line 1282, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/opt/venv/lib/python3.10/site-packages/botocore/awsrequest.py", line 94, in _send_request
    rval = super()._send_request(
  File "/usr/local/lib/python3.10/http/client.py", line 1328, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.10/http/client.py", line 1277, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/opt/venv/lib/python3.10/site-packages/botocore/awsrequest.py", line 123, in _send_output
    self.send(msg)
  File "/opt/venv/lib/python3.10/site-packages/botocore/awsrequest.py", line 218, in send
    return super().send(str)
  File "/usr/local/lib/python3.10/http/client.py", line 975, in send
    self.connect()
  File "/opt/venv/lib/python3.10/site-packages/urllib3/connection.py", line 205, in connect
    conn = self._new_conn()
  File "/opt/venv/lib/python3.10/site-packages/urllib3/connection.py", line 186, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <botocore.awsrequest.AWSHTTPConnection object at 0x7f381c668460>: Failed to establish a new connection: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/venv/lib/python3.10/site-packages/earhorn/stream_archive.py", line 147, in wait_for_segments
    self.storage.ingest_segment(
  File "/opt/venv/lib/python3.10/site-packages/earhorn/stream_archive_s3.py", line 26, in ingest_segment
    self._client.upload_file(
  File "/opt/venv/lib/python3.10/site-packages/boto3/s3/inject.py", line 143, in upload_file
    return transfer.upload_file(
  File "/opt/venv/lib/python3.10/site-packages/boto3/s3/transfer.py", line 288, in upload_file
    future.result()
  File "/opt/venv/lib/python3.10/site-packages/s3transfer/futures.py", line 103, in result
    return self._coordinator.result()
  File "/opt/venv/lib/python3.10/site-packages/s3transfer/futures.py", line 266, in result
    raise self._exception
  File "/opt/venv/lib/python3.10/site-packages/s3transfer/tasks.py", line 139, in __call__
    return self._execute_main(kwargs)
  File "/opt/venv/lib/python3.10/site-packages/s3transfer/tasks.py", line 162, in _execute_main 
    return_value = self._main(**kwargs)
  File "/opt/venv/lib/python3.10/site-packages/s3transfer/tasks.py", line 348, in _main
    response = client.create_multipart_upload(
  File "/opt/venv/lib/python3.10/site-packages/botocore/client.py", line 514, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/opt/venv/lib/python3.10/site-packages/botocore/client.py", line 921, in _make_api_call 
    http, parsed_response = self._make_request( 
  File "/opt/venv/lib/python3.10/site-packages/botocore/client.py", line 944, in _make_request
    return self._endpoint.make_request(operation_model, request_dict)
  File "/opt/venv/lib/python3.10/site-packages/botocore/endpoint.py", line 119, in make_request 
    return self._send_request(request_dict, operation_model)
  File "/opt/venv/lib/python3.10/site-packages/botocore/endpoint.py", line 202, in _send_request
    while self._needs_retry(
  File "/opt/venv/lib/python3.10/site-packages/botocore/endpoint.py", line 354, in _needs_retry 
    responses = self._event_emitter.emit(
  File "/opt/venv/lib/python3.10/site-packages/botocore/hooks.py", line 412, in emit
    return self._emitter.emit(aliased_event_name, **kwargs)
  File "/opt/venv/lib/python3.10/site-packages/botocore/hooks.py", line 256, in emit
    return self._emit(event_name, kwargs)
  File "/opt/venv/lib/python3.10/site-packages/botocore/hooks.py", line 239, in _emit
    response = handler(**kwargs)
  File "/opt/venv/lib/python3.10/site-packages/botocore/retryhandler.py", line 207, in __call__ 
    if self._checker(**checker_kwargs):
  File "/opt/venv/lib/python3.10/site-packages/botocore/retryhandler.py", line 284, in __call__ 
    should_retry = self._should_retry(
  File "/opt/venv/lib/python3.10/site-packages/botocore/retryhandler.py", line 320, in _should_retry
    return self._checker(attempt_number, response, caught_exception)
  File "/opt/venv/lib/python3.10/site-packages/botocore/retryhandler.py", line 363, in __call__ 
    checker_response = checker(
  File "/opt/venv/lib/python3.10/site-packages/botocore/retryhandler.py", line 247, in __call__ 
    return self._check_caught_exception(
  File "/opt/venv/lib/python3.10/site-packages/botocore/retryhandler.py", line 416, in _check_caught_exception
    raise caught_exception
  File "/opt/venv/lib/python3.10/site-packages/botocore/endpoint.py", line 281, in _do_get_response
    http_response = self._send(request)
  File "/opt/venv/lib/python3.10/site-packages/botocore/endpoint.py", line 377, in _send
    return self.http_session.send(request)
  File "/opt/venv/lib/python3.10/site-packages/botocore/httpsession.py", line 477, in send
    raise EndpointConnectionError(endpoint_url=request.url, error=e)
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "<endpoint>/2022/10/07_08.09.14.mp3?uploads"
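
A sketch of the re-queue idea: catch the upload failure in the archive worker and push the segment back onto the queue for a later attempt. The queue/handler names are made up, and only the EndpointConnectionError visible in the traceback is caught:

import queue
import time
from pathlib import Path

import botocore.exceptions

def upload_worker(segments: "queue.Queue[Path]", storage) -> None:
    # `storage` is assumed to expose ingest_segment(path), like the S3 handler
    # in the traceback; on connection errors the segment goes back in the queue.
    while True:
        segment = segments.get()
        try:
            storage.ingest_segment(segment)
        except botocore.exceptions.EndpointConnectionError:
            print(f"upload failed for {segment}, re-queueing")
            segments.put(segment)
            time.sleep(30)  # back off before the next attempt
        finally:
            segments.task_done()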

Allow archive to S3

It would be great to be able to archive to an S3-compatible bucket, as archive storage can get large. My use case would be running Earhorn in the cloud, monitoring the stream with the archive accessible for production staff to pull logs, proofs, create stings, etc. Object storage is much more cost-effective than block storage for large volumes of data.

Pipe ffmpeg calls to reduce network/CPU usage

I am considering having a single ffmpeg call that downloads the stream and pipes the output to both the archiver and the silence detection filter.

Currently we show up as 2 extra listeners in the icecast stats because we make 2 calls to the same stream.
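
The command in the logs above already points in this direction: one input, one silencedetect output discarded to null, and one segment output for the archive. A sketch of building that combined command in Python; the URL and paths are placeholders:

import subprocess

STREAM_URL = "https://example.org/stream.aac"  # placeholder

# One ffmpeg process, one download, two output chains:
# the first output runs silencedetect and is discarded,
# the second writes the archive segments.
command = [
    "ffmpeg", "-hide_banner", "-nostats", "-re", "-vn",
    "-i", STREAM_URL,
    # output 1: silence detection only
    "-map", "0", "-af", "silencedetect=noise=-60dB:d=2", "-f", "null", "/dev/null",
    # output 2: archive segments
    "-map", "0", "-map_metadata", "-1",
    "-f", "segment", "-strftime", "1", "-segment_time", "600",
    "-segment_format", "mp3", "-segment_list", "segments.csv",
    "-reset_timestamps", "1", "incoming/segment.%Y-%m-%d-%H-%M-%S.mp3",
]

process = subprocess.Popen(command, stderr=subprocess.PIPE, text=True)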

Add icecast statistics, becoming an icecast exporter

I will need to add metrics for the statistics provided by icecast. Doing so, this tool might become an entire icecast exporter with extra features such as silence detection and archiving.

The important stats for me are the listener count and the mountpoint details.
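
A sketch of pulling those stats, assuming the standard Icecast status-json.xsl endpoint; the URL is a placeholder:

import httpx

STATUS_URL = "http://icecast.example.org:8000/status-json.xsl"  # placeholder

response = httpx.get(STATUS_URL, timeout=5.0)
response.raise_for_status()
stats = response.json()

# Icecast returns a single object or a list under "source" depending on
# how many mountpoints are active; normalise to a list before iterating.
sources = stats.get("icestats", {}).get("source", [])
if isinstance(sources, dict):
    sources = [sources]

for source in sources:
    print(source.get("listenurl"), "listeners:", source.get("listeners"))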

Remove custom script hook

I don't know if the custom script hook will really be used; the Prometheus metrics are quite handy.

I will probably remove it to simplify the code.

Better configuration handling

Having an advanced CLI is probably not the best way to configure this app, especially since features will land and require more and more settings over time.

Maybe using an environment-variable-based config manager would be a better idea. A file-based configuration could also work.
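
A minimal sketch of an environment-variable-based configuration using only the standard library plus python-dotenv (already a dependency); the variable names are invented:

import os
from dataclasses import dataclass
from typing import Optional

from dotenv import load_dotenv

@dataclass(frozen=True)
class Settings:
    stream_url: str
    archive_path: Optional[str]
    silence_noise_db: float

def load_settings() -> Settings:
    load_dotenv()  # pick up a local .env file if present
    return Settings(
        stream_url=os.environ["EARHORN_STREAM_URL"],  # invented variable names
        archive_path=os.environ.get("EARHORN_ARCHIVE_PATH"),
        silence_noise_db=float(os.environ.get("EARHORN_SILENCE_NOISE_DB", "-60")),
    )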

Monitor multiple streams

As an exporter, I would expect a 1:1 relation to an icecast server, but maybe it's worth allowing the monitoring of multiple icecast streams/stats.

I don't know how much complexity it would add to allow multiple stats endpoints.

@paddatrapper What use cases would you have?

ffmpeg not restarting after crash

Describe the bug

After ffmpeg exits with 0, it is restarted, but then exits with 1. In this case, there is no attempt to restart again, and earhorn continues to run without actually running ffmpeg. Logs are:

2023-03-12 14:06:38,703 | INFO     | earhorn.stream:listen:123 - ffmpeg command exited with 0
2023-03-12 14:06:46,945 | INFO     | earhorn.stream:listen:128 - stream listener stopped
2023-03-12 14:06:47,747 | ERROR    | earhorn.stream:check_stream:83 - could not stream from 'https://edge.iono.fm/xice/uctradio_live_medium.aac'
2023-03-12 14:06:53,229 | ERROR    | earhorn.stream:check_stream:83 - could not stream from 'https://edge.iono.fm/xice/uctradio_live_medium.aac'
2023-03-12 14:06:59,205 | INFO     | earhorn.stream:listen:96 - starting stream listener
2023-03-12 14:06:59,205 | INFO     | earhorn.stream:listen:101 - preparing ArchiveHandler 
2023-03-12 14:06:59,207 | INFO     | earhorn.stream_silence:parse_process_output:66 - starting to parse stdout
2023-03-12 14:07:00,377 | INFO     | earhorn.stream:listen:123 - ffmpeg command exited with 1
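
A sketch of a more persistent restart loop: keep relaunching ffmpeg regardless of the exit code, with a small backoff so a permanently broken stream doesn't spin. The command is shortened and the loop is illustrative rather than earhorn's actual listen:

import subprocess
import time

STREAM_URL = "https://example.org/stream.aac"  # placeholder
COMMAND = ["ffmpeg", "-hide_banner", "-nostats", "-re", "-vn", "-i", STREAM_URL,
           "-af", "silencedetect=noise=-60dB:d=2", "-f", "null", "/dev/null"]

def listen_forever() -> None:
    backoff = 5
    while True:
        exit_code = subprocess.call(COMMAND)
        print(f"ffmpeg command exited with {exit_code}")
        if exit_code == 0:
            backoff = 5  # clean exit: retry quickly
        else:
            backoff = min(backoff * 2, 300)  # failure: back off, up to 5 minutes
        time.sleep(backoff)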

Environment

  • Docker image: ghcr.io/jooola/earhorn:v0.17.0
