AbletonOSC's Introduction

ideoforms / Daniel John Jones

I am a UK-based artist-researcher, interested in algorithmic composition, sonification, systems music, sound installations, and spatial audio.

As part of my practice, I develop a number of open-source frameworks that are focused on generating and interacting with sound, primarily via Python.

  • 🎵 isobar: generate and manipulate musical patterns and control sequences, outputting MIDI, OSC, or MIDI files
  • 🎛 signalflow: real-time sound synthesis, processing and DSP
  • 🎹 AbletonOSC: enables control of Ableton Live with OpenSoundControl
  • 🔌 pylive: connects to AbletonOSC to control Ableton Live directly from Python scripts
  • 🔊 python-supercollider: control the SuperCollider synthesis engine from Python

AbletonOSC's People

Contributors

axeldelafosse, billmoser, capturcus, ideoforms, marcobn, markmarijnissen, mcparaf, steeltrack, stevmills, strongbearceo

AbletonOSC's Issues

Get track_data extensions

  • /live/song/get/track_data should send its data in multiple packets if a single packet is too small to hold it
  • /live/song/get/track_data should support -1 to denote the maximum track index

AbletonOSC only seems to listen to Localhost?

Hi,

I have been using AbletonOSC (receiver) and Unity3D (sender) for half a year on a single PC without any problems, always sending to localhost (127.0.0.1). Now I am trying to split the OSC sender (the Unity3D application) and Ableton across two different PCs on a network, but AbletonOSC does not pick up the messages. Monitoring the incoming network traffic on the Ableton PC with Wireshark, I can see the OSC messages arriving on the correct port, but they are not picked up by AbletonOSC. Is there any way to make the plugin listen on addresses other than localhost, or am I doing something wrong?

Any help is greatly appreciated. Thanks a lot for that awesome plugin btw!

Greetings,
JM
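
For reference, the startup log shown further down this page reports the server binding to ('127.0.0.1', 11000), which would explain why traffic from another machine is ignored. Below is a minimal sketch of the sender side with python-osc, useful for ruling out the sender; 192.168.1.20 is a placeholder for the Ableton PC's LAN address, and whether the message is accepted still depends on which interface AbletonOSC's socket is actually bound to:

    from pythonosc.udp_client import SimpleUDPClient

    # Placeholder address of the PC running Ableton Live + AbletonOSC.
    ABLETON_IP = "192.168.1.20"
    ABLETON_PORT = 11000  # AbletonOSC's default listening port

    client = SimpleUDPClient(ABLETON_IP, ABLETON_PORT)
    # Setting the tempo has a visible effect in Live, so it is easy to verify.
    client.send_message("/live/song/set/tempo", 120.0)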

Request: Get audio output value (complementary to getting the volume set in Ableton)

I am writing a Python tool that does automatic camera switching for a live set. It needs to read the audio levels of specific tracks to know which instruments are being played.

I linked my script to Ableton with python-osc and AbletonOSC, but then discovered that /live/track/get/volume returns the volume fader value set in Ableton, not the level of the audio signal.

Help: How do I test with OSC?

I tried a few OSC tester applications, but I can't seem to connect to Ableton :(
https://github.com/ETCLabs/OSCTester

I tried all settings!
I tried connecting to 0.0.0.0 on port 11000, host 127.0.0.1 on port 11000, and also combinations with port 11001.
You get the idea! Sadly, I'm not able to connect and test :(

I'm not trying to do anything in particular, I just wanted to test it out.

Please help ❤️
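
A minimal self-test with python-osc, assuming the documented port scheme (queries go to 11000, replies come back on 11001, as the startup log further down this page also shows):

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer
    from pythonosc.udp_client import SimpleUDPClient

    # Listen for replies on 11001 before sending the query, so nothing is lost.
    dispatcher = Dispatcher()
    dispatcher.set_default_handler(lambda address, *args: print(address, args))
    server = BlockingOSCUDPServer(("127.0.0.1", 11001), dispatcher)

    client = SimpleUDPClient("127.0.0.1", 11000)
    client.send_message("/live/song/get/metronome", [])

    server.handle_request()  # wait for a single reply, print it, then return
    server.server_close()

If this prints a reply but a GUI tester does not, the tester is probably not listening on port 11001, or something else on the machine is already bound to that port.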

Add support for setting gain values in dB

Currently, all volume levels are floats (0..1). The API should ideally expose the ability to set gain values in dB, which will require an internal mapping from dB to linear (as provided by @marcobn), and vice versa.
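
For illustration only, here is the standard dB-to-amplitude relationship; note that Live's 0..1 volume parameter follows the fader curve rather than a plain linear amplitude, so the real mapping inside AbletonOSC would need Live's own curve:

    import math

    def db_to_linear(db: float) -> float:
        # Standard amplitude conversion; NOT Live's fader curve.
        return 10 ** (db / 20.0)

    def linear_to_db(amplitude: float) -> float:
        return 20.0 * math.log10(amplitude)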

Track API cannot get float value

Hi!

When I try to get the volume (/live/track/get/volume) or panning (/live/track/get/panning) of a track, I get the following error in the log file.

(2022-09-19 22:07:49,875) [INFO] AbletonOSC: Error handling message: Traceback (most recent call last):
  File "PATH\AbletonOSC\abletonosc\osc_server.py", line 71, in process
    self.send(message.address, rv)
  File "PATH\AbletonOSC\abletonosc\osc_server.py", line 47, in send
    for param in params:
TypeError: 'float' object is not iterable

I also get a similar error when I try to get the send value (/live/track/get/send):

(2022-09-19 22:15:22,189) [INFO] AbletonOSC: Error handling message: Traceback (most recent call last):
  File "PATH\AbletonOSC\abletonosc\osc_server.py", line 71, in process
    self.send(message.address, rv)
  File "PATH\AbletonOSC\abletonosc\osc_server.py", line 47, in send
    for param in params:
TypeError: 'DeviceParameter' object is not iterable
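
Based on these tracebacks, send() iterates over the handler's return value, so handlers that return a bare float (volume, panning) or a single object trip it up. A hedged sketch of the kind of normalisation that could be applied before the loop (not the actual AbletonOSC code):

    def as_tuple(rv):
        # Wrap scalar return values so "for param in params" keeps working.
        if rv is None:
            return tuple()
        if isinstance(rv, (tuple, list)):
            return tuple(rv)
        return (rv,)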

Weird Null Error

I made a tool in TouchDesigner that speeds up or slows down tracks across multiple BPMs, to keep the show in sync when I change a song's BPM. However, whenever I send the BPM value to AbletonOSC (at /live/song/set/tempo), I get this error message:

(2023-01-20 21:47:23,734) [INFO] Setting property for song: tempo (new value 90.0)
(2023-01-20 21:47:23,734) [ERROR] AbletonOSC: Error handling OSC message: Python argument types in
    None.None(Song, str)
did not match C++ signature:
    None(class TPyHandle<class ASong>, float)
(2023-01-20 21:47:23,735) [WARNING] AbletonOSC: Traceback (most recent call last):
  File "C:\Users\scripthead\Documents\Ableton\User Library\Remote Scripts\AbletonOSC\abletonosc\osc_server.py", line 108, in process
    rv = callback(message.params)
  File "C:\Users\scripthead\Documents\Ableton\User Library\Remote Scripts\AbletonOSC\abletonosc\handler.py", line 32, in _set_property
    setattr(target, prop, params[0])
Boost.Python.ArgumentError: Python argument types in
    None.None(Song, str)
did not match C++ signature:
    None(class TPyHandle<class ASong>, float)

I was running Ableton 11.2.6
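
The Boost.Python error shows the value arriving as a str where Live expects a float, which suggests the OSC message was sent with a string type tag. If you can reproduce the call from Python, a hedged check with python-osc that forces a float argument looks like this (90.0 is the tempo from the log above); in TouchDesigner the equivalent fix is to make sure the tempo is sent as a number, not a string:

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 11000)
    # Passing a Python float makes python-osc emit an OSC float ('f') type tag,
    # rather than the string type the error message points to.
    client.send_message("/live/song/set/tempo", float(90.0))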

Import errors on MacOS

I'm having trouble running the code outside of Ableton. The Python interpreter embedded in Ableton seems to load the code just fine, but the trouble starts when I try to run pytest or run any module directly with python [module]

I'm using pyenv and virtualenv, so it's a completely clean and isolated environment.

The problem seems to be import errors.

I get a bunch of errors very similar to the one dumped below. It seems to be related to relative imports.

Terminal output

python version: 3.11.0
pytest version: 7.2.0

MacOS Catalina 10.15.7


_____________________________________ ERROR collecting tests/test_track.py _____________________________________
ImportError while importing test module '/Users/ferry/dev/seed/remote/AbletonOSC/tests/test_track.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../../../../.pyenv/versions/3.11.0/lib/python3.11/importlib/__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
tests/__init__.py:4: in <module>
    from ..client import AbletonOSCClient, TICK_DURATION
client/__init__.py:1: in <module>
    from .client import AbletonOSCClient, TICK_DURATION
client/client.py:3: in <module>
    from pythonosc.udp_client import SimpleUDPClient
E   ModuleNotFoundError: No module named 'pythonosc'

Clip position

Hello!
Thanks a lot for porting the former LiveOSC to Ableton Live 11!
I know it's a work in progress, so I don't expect everything to be in place yet.
I would like to ask for one listener, though, that I would consider essential.
From what I understand it should be
/live/clip/position (int track) (int clip) (float position) (float length) (float loop_start) (float loop_end)
to see how far a playing clip has progressed in its total length or loop.

Oh, and of course related to this:
Is there a listener or a query for the play state of clips?

Joern
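
For illustration, a hedged sketch of how a client could consume the proposed reply; the address and argument order here are the ones proposed above, not a confirmed part of the API:

    from pythonosc.dispatcher import Dispatcher

    dispatcher = Dispatcher()

    def on_clip_position(address, track, clip, position, length, loop_start, loop_end):
        # Argument order follows the proposal above; purely illustrative.
        print("track %d clip %d: %.2f/%.2f (loop %.2f..%.2f)"
              % (track, clip, position, length, loop_start, loop_end))

    dispatcher.map("/live/clip/position", on_clip_position)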

Getter for time signature

I'm working on something where I would like to keep count of the measures. However, there seems to be no listener for that, so I figured I'd build my own based on /live/song/beat. The thing is, I don't know the time signature of the song and can't just assume it is 4/4. I found the song properties signature_denominator and signature_numerator here: https://docs.cycling74.com/max8/vignettes/live_object_model#Song.
However, I can't get them (yet), can I? I tried listening for them with /live/song/start_listen/signature_denominator, but I got an "Unknown OSC address" error. I also tried get instead of start_listen, with the same result.

Am I overlooking something here?
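
In the meantime, a hedged client-side sketch of counting measures from the beat listener; the numerator is hard-coded as a fallback until the property can actually be queried (the property names above come from the Live Object Model docs, not from AbletonOSC):

    from pythonosc.dispatcher import Dispatcher

    SIGNATURE_NUMERATOR = 4  # fallback until signature_numerator can be queried

    def on_beat(address, beat):
        # /live/song/beat delivers a running beat count; derive the bar position.
        measure = int(beat) // SIGNATURE_NUMERATOR + 1
        beat_in_measure = int(beat) % SIGNATURE_NUMERATOR + 1
        print("measure %d, beat %d" % (measure, beat_in_measure))

    dispatcher = Dispatcher()
    dispatcher.map("/live/song/beat", on_beat)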

Track API properties not documented in ReadMe

/live/track/get/clips/color

get & set for
arm, color_index, current_monitoring_state, fold_state

are working, but missing in the ReadMe.
It seems all of the above track properties, and more, also have undocumented listeners (only color, name and arm tested).

Are there undocumented listeners for clips implemented already, too?
I'm not able to "find" listeners in track.py, so I don't know what to look for in clip.py.

Log entries do not reflect origin of detected changes.

(2022-11-29 02:14:10,237) [INFO] Adding listener for track (0,), property: color
(2022-11-29 02:14:10,239) [INFO] Adding listener for track (1,), property: color
(2022-11-29 02:14:10,240) [INFO] Adding listener for track (2,), property: color
(2022-11-29 02:14:10,243) [INFO] Adding listener for track (0,), property: name
(2022-11-29 02:14:10,244) [INFO] Adding listener for track (1,), property: name
(2022-11-29 02:14:10,245) [INFO] Adding listener for track (2,), property: name
(2022-11-29 02:14:14,537) [INFO] Property color changed: 8912743
(2022-11-29 02:14:26,101) [INFO] Property name changed: 2-Audio
(2022-11-29 02:14:40,191) [INFO] Property name changed: 3-Audio

Response parameters missing in ReadMe

Response parameters are missing in the ReadMe for
live/device/get/name
live/device/get/class_name
live/device/get/type

(already reported by mail)

Define a list of messages to fire when a set is loaded

This is my first contact with AbletonOSC. Excellent initiative.

One question: do any of the files allow you to add a list of actions to be triggered when Ableton is loaded?

Currently, when we reload or open a project in Ableton we get the message "AbletonOSC: Listening for OSC on port 11000".

It would be very interesting to be able to send, for example, messages like:
/live/song/get/metronome
/live/track/get/clips/name
/live/track/get/color
/live/device/get/name
(among others)

I'm currently a ClyphX Pro user, which provides a folder called "user_actions". Inside it, it is possible to create .py files that send OSC messages when Live is started. For example:

File name: listener_track.py

from ClyphX_Pro.clyphx_pro.UserActionsBase import UserActionsBase

TOCA_MAX_TRACKS = 8


class listener_track(UserActionsBase):

    def create_actions(self):
        self.add_global_action('listener_track', self.listener_track)

    def listener_track(self, *a):
        # Re-send track names/colors whenever tracks are added or removed.
        self.song().add_tracks_listener(self.atualiza)

    def atualiza(self):
        amount_of_tracks = len(self.song().tracks)

        for tr_nums in range(TOCA_MAX_TRACKS):
            if tr_nums < amount_of_tracks:
                # Track colour as a 6-digit hex string, e.g. 'd86b02'
                tr_colore = '%06x' % self.song().tracks[tr_nums].color
                tr_named = self.song().tracks[tr_nums].name
            else:
                # Placeholder values for slots beyond the last existing track
                tr_colore = "rgba(109,181,253,0)"
                tr_named = " "

            self.canonical_parent.clyphx_pro_component.trigger_action_list(
                'OSC str /track_name_%s "%s"' % (tr_nums, tr_named))
            self.canonical_parent.clyphx_pro_component.trigger_action_list(
                'OSC str /color_address%s "%s"' % (tr_nums, tr_colore))

I'm not a Python programmer (this code probably has sections that need refactoring), but it works for me. That is, I can get the names and colors of each track (or a placeholder when they don't exist) as soon as I start my live set.

Yours sincerely!
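
An alternative that needs no changes to AbletonOSC is to fire the queries listed above from a small client once the set (and AbletonOSC) is up. A hedged sketch with python-osc; the track/device indices are placeholders and the argument layout is assumed from the other issues on this page:

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 11000)

    # Fire an initial round of queries; replies arrive on port 11001.
    client.send_message("/live/song/get/metronome", [])
    for track_id in range(8):  # placeholder track count
        client.send_message("/live/track/get/clips/name", [track_id])
        client.send_message("/live/track/get/color", [track_id])
        client.send_message("/live/device/get/name", [track_id, 0])  # first device, if any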

Request: The red box

The red box is an important tool when remote controlling the Live session view.
It's part of the remote scripts for all grid controllers like Launchpad, Push, APC, ...
I assume you are familiar with it, if not:
https://remotify.io/sites/default/files/pre-built-scripts/kuala/red-box.gif

The red box is an API feature; its functions are activate/deactivate, set size (x/y) and move up/down/left/right.
In the example above it is 4x4; with a Novation Launchpad, for example, it would be 8x8 and show
which 8x8 block of clips is controlled by the Launchpad. If you move the box by 8 steps, the Launchpad
shows and controls a different 8x8 block of clips.

See it in action here:
https://www.youtube.com/watch?v=3aX-7jmYJJU

It would be nice for AbletonOSC too, also as a way to limit the number of active listeners, since you only have to
watch the tracks and clips that are inside the box at any moment. AbletonOSC could provide only the mentioned
visual functionality of the box and leave the handling of the listeners to the client, OR assist the
client by adjusting some things internally when the box is moved.

Send me a message if you want to go into details.
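
For what it's worth, a heavily hedged sketch of how stock Remote Scripts typically drive this highlight; the set_session_highlight call and its argument order are an assumption based on Live's bundled scripts and are not confirmed for AbletonOSC:

    # Inside a ControlSurface subclass; the c_instance call below is an
    # assumption based on Live's bundled Remote Scripts, not AbletonOSC code.
    def set_red_box(self, track_offset, scene_offset, width, height):
        self._c_instance.set_session_highlight(
            track_offset, scene_offset,  # top-left corner (track, scene)
            width, height,               # size in tracks x scenes
            False)                       # include return tracks (assumed flag)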

Changes in /live/track/get/clips/name

I started using AbletonOSC with a version from before 2022-12-17.

Taking track 0 (with empty clips) as an example: with this new version, the message /live/track/get/clips/name returns:

[0, null, null, null, null, null, null, null, null]

In the previous version, the same message returned:

[null, null, null, null, null, null, null, null, null]

PS: I'm using the Open Stage Control server app

Track type

Is there a way to find out whether a track is MIDI / audio (/ send / master)?
I couldn't find anything in AbletonOSC, but neither in the Live API - there seems to be no 'track type'
property. Strange, as it seems useful to know which calls you can make to a certain track.
All I could find was "has midi input/output" and "has audio input/output", which will help to infer
the desired information, but doesn't seem very elegant...
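
A hedged sketch of that inference, using the Song and Track properties named above as they appear in the Live Object Model; treat it as a heuristic, not an official classification:

    def classify_track(song, track):
        # Heuristic only: Live exposes no explicit 'track type' property.
        if track == song.master_track:
            return "master"
        if track in list(song.return_tracks):
            return "return"
        if track.has_midi_input:
            return "midi"
        return "audio"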

Song API methods not documented in ReadMe

I found some undocumented methods in song.py and tested them. They work, but are not in the ReadMe.

jump_by (float)
jump_to_prev_cue
jump_to_next_cue
duplicate_scene (int)
duplicate_track (int)
tap_tempo
trigger_session_record

I also found the property start_time in song.py, but this does not work and seems to be a clip - not song - property.

Handling of all active listeners

It's sometimes hard to keep an overview of all started listeners and it gets more difficult with the new wildcard *.

Would it be possible to get something like /live/get/allactivelisteners and /live/stop/allactivelisteners ?
Probably this is only for development purposes and may be too much work to implement as listeners "belong" to different .py files, right?

Documentation of device type numbers

live/device/get/type works, but the results of my short test do not match the values given in the ReadMe.
I get 1 for instruments, 2 for audio fx and 4 for midi fx.
Couldn't find any API documentation listing these values to check them.

(already reported by mail)

Tolerate floats?

Request from lengwenings: Could AbletonOSC tolerate floats as IDs?

TouchOSC scripting has two ways of sending OSC messages - simple and complex.
Simple would be enough for AbletonOSC, but when using the simple way, TouchOSC
auto-converts all numbers to floats, which AbletonOSC does not tolerate.

This means you have to script every OSC message the complex way:

sendOSC( {'/live/set/clip/name', { {tag = 'i', value = 0 },  {tag = 'i', value = 2 } ,  {tag = 's', value = "test" } } } )

instead of the simple way:

sendOSC( '/live/set/clip/name', 0, 2, "test" )
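
A hedged sketch of the kind of coercion AbletonOSC could apply to incoming index arguments, so that TouchOSC's auto-floated integers (0.0, 2.0, ...) would be accepted while genuinely fractional values are still rejected:

    def coerce_index(value):
        # Accept 2, 2.0 or "2" as a track/clip index; reject 2.5.
        as_float = float(value)
        if not as_float.is_integer():
            raise ValueError("index must be a whole number: %r" % (value,))
        return int(as_float)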

Request: add instrument to track

I am writing a Python CLI tool that lets me select a few chords and send them to Live using AbletonOSC. It works fine, but the more I think about it, the more I want it to become a kind of template creator where I can quickly create jam tracks by specifying only the chord progression.

So my question: is there a way to add instruments to tracks using AbletonOSC? I tried many things, but I guess it is not yet implemented. Is this something that is planned?

btw I really appreciate this project, thanks for the good work!

Cue Points

Can you add a way to get cue point names and to move between them, such as next, previous, and jumping to a specific cue point?

How to bind to an external device?

Hi

First of all, thanks for your script, it's awesome.

I'm trying to build an Arduino controller based on this.
I'm wondering how to get the script working with an IP that is not 127.0.0.1.

Let's say my router assigns IP 192.168.1.12 to my controller; how can I let the script know where to send messages?

I'm still pretty much a n00b with IPs and networking, so forgive my ignorance.

Request: 3 new clip listeners

I would like to have the following 3 clip listeners, which are quite essential
for building a TouchOSC template for session view:

/live/clip/start_listen/color
/live/clip/start_listen/name
/live/clip/start_listen/length

Edit: Make it 5 ;-)

/live/clip_slot/start_listen/has_clip
/live/clip_slot/start_listen/has_stop_button

Request: select_track

Could you please implement the ability to select a track and to query which track is currently selected?

for instance:
/live/song/get/selected_track
/live/song/set/selected_track

I would highly appreciate that.
Thank you!

I really like this project!

Device in a group

I can change the Gain of the first device on the first channel:

[Screenshot 2022-12-07 at 02 01 25]

    params = (0, 0, 1.0, 1.0)
    client.send_message("/live/device/set/parameters/value", params)

How do I achieve the same when the device is in a group?

[Screenshot 2022-12-07 at 02 03 09]

I guess that, behind the scenes, we have to get the Device object from this group (maybe via Chain.devices), and then use this object in /live/device/set/parameters/value. Is this supported now?
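
A hedged sketch of walking into a rack using the Live Object Model's chain properties (can_have_chains, chains, devices); how AbletonOSC would expose the resulting device index is exactly the open question in this issue:

    def first_device_in_first_chain(track):
        # Sketch based on the Live Object Model: rack devices expose
        # .can_have_chains and .chains, and each chain has its own .devices.
        for device in track.devices:
            if getattr(device, "can_have_chains", False) and len(device.chains) > 0:
                chain = device.chains[0]
                if len(chain.devices) > 0:
                    return chain.devices[0]
        return None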

live/device/set/parameters/value Error

live/device/set/parameters/value (to set ALL device values at once) does not work and gives an error in the log.
I tried it with Ableton's MIDI Pitch device, which has 4 parameters, and sent
track_id, device_id, value, value, value, value (int, int, float, float, float, float).

/get/parameters/value, /get/parameter/value and also /set/parameter/value (to set ONE device value) work as expected.

(already reported by mail)

(2022-11-29 00:38:15,965) [INFO] AbletonOSC: Error handling message: Traceback (most recent call last):
  File "C:\Users\Anja\Documents\Ableton\User Library\Remote Scripts\AbletonOSC\abletonosc\osc_server.py", line 107, in process
    rv = callback(message.params)
  File "C:\Users\Anja\Documents\Ableton\User Library\Remote Scripts\AbletonOSC\abletonosc\device.py", line 14, in device_callback
    return func(device, *args, params[2:])
  File "C:\Users\Anja\Documents\Ableton\User Library\Remote Scripts\AbletonOSC\abletonosc\device.py", line 62, in device_set_parameters_value
    for index, value in params:
TypeError: cannot unpack non-iterable float object
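
A hedged client-side workaround while the bulk setter is broken: send one message per parameter via the singular address reported as working above. The argument order (track_id, device_id, parameter_id, value) is assumed from the context of this issue:

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 11000)

    track_id, device_id = 0, 0
    values = [1.0, 0.5, 0.0, 1.0]  # one value per parameter of the device

    for parameter_id, value in enumerate(values):
        client.send_message("/live/device/set/parameter/value",
                            [track_id, device_id, parameter_id, value])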

[WinError 10035] A non-blocking socket operation could not be completed immediately

Hello everyone!
I'm trying to set up AbletonOSC with Ableton 11 on Windows 10.
I get an error message in exactly the same way as this user on Stackoverflow.
I was wondering if there has been a solution to this, and whether there is documentation about it, since I couldn't find anything online. I suspect it's a problem with Windows.

I get this message in the activity log file:

[INFO] AbletonOSC: Socket error: Traceback (most recent call last):
  File "C:\ProgramData\Ableton\Live 11 Suite\Resources\MIDI Remote Scripts\AbletonOSC\abletonosc\osc_server.py", line 62, in process
    data, addr = self._socket.recvfrom(65536)
BlockingIOError: [WinError 10035] A non-blocking socket operation could not be completed immediately

and at the beginning of the log file:

(2021-12-15 01:05:42,667) [INFO] Starting OSC server (local ('127.0.0.1', 11000), remote ('127.0.0.1', 11001))
(2021-12-15 01:05:42,670) [INFO] Disconneting...
(2021-12-15 01:05:42,671) [INFO] Starting OSC server (local ('127.0.0.1', 11000), remote ('127.0.0.1', 11001))
(2021-12-15 01:05:43,026) [INFO] AbletonOSC: Error handling message: Traceback (most recent call last):
  File "C:\ProgramData\Ableton\Live 11 Suite\Resources\MIDI Remote Scripts\AbletonOSC\abletonosc\osc_server.py", line 64, in process
    message = OscMessage(data)
  File "C:\ProgramData\Ableton\Live 11 Suite\Resources\MIDI Remote Scripts\AbletonOSC\pythonosc\osc_message.py", line 23, in __init__
    self._parse_datagram()
  File "C:\ProgramData\Ableton\Live 11 Suite\Resources\MIDI Remote Scripts\AbletonOSC\pythonosc\osc_message.py", line 33, in _parse_datagram
    type_tag, index = osc_types.get_string(self._dgram, index)
  File "C:\ProgramData\Ableton\Live 11 Suite\Resources\MIDI Remote Scripts\AbletonOSC\pythonosc\parsing\osc_types.py", line 86, in get_string
    return data_str.replace(b'\x00', b'').decode('utf-8'), start_index + offset
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe5 in position 0: invalid continuation byte

One thing to note: I cannot rule out user error 100%.
Thank you

Return errors via OSC?

Should some errors and infos that go to the log maybe also be returned via OSC
in some way, perhaps as /AbletonOSC/error (string) or something? Not to be handled
by TouchOSC in any way, just to provide real-time feedback in the OSC monitor when
something is wrong.
It wouldn't do any harm when everything is OK, as it wouldn't cause any additional messages,
but it would make it easier to try out and explore AbletonOSC, and to develop and debug tools using
AbletonOSC, as it would be easier to understand why nothing happens in some situations.

Some examples of how this came to mind:
AttributeError: 'NoneType' object has no attribute 'color' - when you try to get color from an empty clip slot.
AttributeError: 'NoneType' object has no attribute 'gain' - when you try to get gain from an empty clip slot.
RuntimeError: Gain is only available for Audio Clips - when you try to get gain from a midi clip.
etc.

And for any kind of typos or calls that are not available:
[INFO] AbletonOSC: Unknown OSC address: /live/track/get/clips/colr
[INFO] AbletonOSC: Unknown OSC address: /live/track/get/clips/color_index
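
If such an address were added, the client side would only need one more dispatcher mapping. A sketch using the /AbletonOSC/error address proposed above, which is purely hypothetical and not an existing part of the API:

    from pythonosc.dispatcher import Dispatcher

    dispatcher = Dispatcher()
    # '/AbletonOSC/error' is the address proposed in this issue, not an
    # existing part of the AbletonOSC API.
    dispatcher.map("/AbletonOSC/error",
                   lambda address, *args: print("AbletonOSC error:", *args))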

Getter for Song length

I noticed the current Song endpoint doesn't have a getter for song_length, though it's part of the LOM. Is it possible to add this getter?

Generic track "start_listen" command

I wondered if it would be possible to have a generic track 'start_listen' command that would allow any property on a given track to be listened for, rather than having to specify the property, as is currently the case.

e.g.
/live/track/start_listen 0

With this you could listen for mute, volume, send level, name, color etc. etc.
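
Until something like that exists, a hedged client-side loop over per-property listener addresses, following the /live/track/start_listen/<property> pattern used elsewhere on this page; the exact set of supported property names is an assumption:

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 11000)

    TRACK_ID = 0
    # Assumed property names; adjust to whatever the handlers actually support.
    for prop in ("mute", "volume", "name", "color"):
        client.send_message("/live/track/start_listen/%s" % prop, [TRACK_ID])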

live/song/get/cue_points Error

live/song/get/cue_points is checked off in my list, so I suppose it worked at some point, but now it doesn't, and it gives me an error in the log.

(2022-11-29 00:25:33,451) [INFO] AbletonOSC: Error handling message: Traceback (most recent call last):
  File "C:\Users\Anja\Documents\Ableton\User Library\Remote Scripts\AbletonOSC\abletonosc\osc_server.py", line 110, in process
    assert isinstance(rv, tuple)
AssertionError

Clips in arrangement view

Hi,
Great job so far reworking LiveOSC. Would it be possible to get the clips that are on a track in arrangement view?
Thanks for all the great work so far!

Judah

Request: Let new listener trigger a 'get'

At the moment, listeners only send information when a change occurs.
So to get the current state of something at the beginning, you have to query it manually before starting a listener.

Would it be possible to let a new listener do this automatically when it is started?

Example:
/live/track/start_listen/color (track_id) would automatically trigger a /live/track/get/color (track_id) once.

This would automatically update a control when a listener is started and do no harm if it is already up-to-date for some reason.
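
A hedged client-side stopgap until then: pair every start_listen with an immediate get so the control is initialised, using the track colour example above:

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 11000)

    def start_listen_with_initial_get(prop, track_id):
        # Subscribe to changes, then query once so the current value arrives
        # immediately instead of only after the first change.
        client.send_message("/live/track/start_listen/%s" % prop, [track_id])
        client.send_message("/live/track/get/%s" % prop, [track_id])

    start_listen_with_initial_get("color", 0)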
