maxspahn / gym_envs_planar
Planar gym environments
Currently, obstacles (static and dynamic) and goals (static and dynamic) can be added to the scene.
This is very helpful for visualization. However, it would be even better if this information could be 'sensed' and, if requested, passed to the observation.
The idea:
The code should be something like:
sensor = PseudoSensor(obstacles=True, goals=True)
env.addSensor(sensor)
In the simulation loop the observation returned by the stepper should then include the information of obstacles and goals at that very time step.
A sensor class has already been added in the urdf environment: https://github.com/maxspahn/gym_envs_urdf/tree/master/sensors.
For this toy-like environment, a pseudo sensor should be enough to test motion planning.
Contact me if you have any questions on it.
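The snippet above only fixes the constructor call; a full sensor could look like the sketch below. Only the `PseudoSensor(obstacles=True, goals=True)` signature comes from the issue; the `sense()` method and the `position(t=...)` interface of obstacles and goals are assumptions for illustration.

```python
class PseudoSensor:
    """Sketch of a pseudo sensor that returns ground-truth positions
    of obstacles and goals as if they were sensed (no noise model)."""

    def __init__(self, obstacles=True, goals=True):
        self._sense_obstacles = obstacles
        self._sense_goals = goals

    def sense(self, obstacles, goals, t=0.0):
        # obstacles/goals are assumed to expose position(t=...);
        # this interface is hypothetical, not taken from the repo
        reading = {}
        if self._sense_obstacles:
            reading["obstacles"] = [o.position(t=t) for o in obstacles]
        if self._sense_goals:
            reading["goals"] = [g.position(t=t) for g in goals]
        return reading
```

In the simulation loop, the stepper would merge such a reading into the returned observation dict.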
Hi, I am trying to make stable-baselines3 work in this project. After updating a few packages, all other examples keep working. However, n_link_reacher reports the following error:
Traceback (most recent call last):
  File "/home/skylove/test/gym_envs_planar/examples/n_link_reacher.py", line 47, in <module>
    main()
  File "/home/skylove/test/gym_envs_planar/examples/n_link_reacher.py", line 41, in main
    ob, _, _, _ = env.step(action)
  File "/home/skylove/.cache/pypoetry/virtualenvs/planarenvs-_YPGfD7A-py3.9/lib/python3.9/site-packages/gym/wrappers/order_enforcing.py", line 11, in step
    observation, reward, done, info = self.env.step(action)
  File "/home/skylove/test/gym_envs_planar/planarenvs/planar_common/planar_env.py", line 115, in step
    self.render()
  File "/home/skylove/test/gym_envs_planar/planarenvs/n_link_reacher/envs/n_link_reacher_env.py", line 48, in render
    self.render_common(bounds)
  File "/home/skylove/test/gym_envs_planar/planarenvs/planar_common/planar_env.py", line 185, in render_common
    goal.renderGym(self._viewer, rendering, t=self.t())
  File "/home/skylove/.cache/pypoetry/virtualenvs/planarenvs-_YPGfD7A-py3.9/lib/python3.9/site-packages/MotionPlanningGoal/dynamicSubGoal.py", line 49, in renderGym
    tf2 = rendering.Transform(rotation=self.angle())
  File "/home/skylove/.cache/pypoetry/virtualenvs/planarenvs-_YPGfD7A-py3.9/lib/python3.9/site-packages/gym/envs/classic_control/rendering.py", line 228, in __init__
    self.set_rotation(rotation)
  File "/home/skylove/.cache/pypoetry/virtualenvs/planarenvs-_YPGfD7A-py3.9/lib/python3.9/site-packages/gym/envs/classic_control/rendering.py", line 246, in set_rotation
    self.rotation = float(new)
TypeError: float() argument must be a string or a number, not 'NoneType'
When I downgrade the following packages, n_link_reacher does not report any errors:
• Updating wrapt (1.14.1 -> 1.14.0)
• Updating astroid (2.11.5 -> 2.11.3)
• Updating pylint (2.13.9 -> 2.13.7)
• Updating forwardkinematics (0.7.0 -> 0.6.0 f06f2ea)
• Updating motion-planning-scenes (0.1.16 -> 0.1.13 86976ff)
• Updating pyglet (1.5.24 -> 1.5.23)
I have tested that forwardkinematics and pyglet cannot be the cause (manually changing their versions does not break the code).
When executing examples/ground_robot_arm.py, an error is thrown due to a wrong number of arguments passed to the function reset().
Error message:
ob = env.reset(np.array([0.0, 0.0, 0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0]))
TypeError: reset() takes 1 positional argument but 4 were given
similar issue as in #5
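One way to resolve the signature mismatch is to pass the initial state via keyword arguments instead of positionals. The sketch below is a toy stand-in; the parameter names `pos`/`vel` are assumptions, not the repository's actual API.

```python
class ReacherSketch:
    """Toy environment whose reset() takes the initial state as
    keyword arguments, so old positional call sites fail loudly
    and new call sites are self-documenting."""

    def reset(self, pos=None, vel=None):
        # fall back to a zero state when nothing is provided
        self._pos = [0.0, 0.0] if pos is None else list(pos)
        self._vel = [0.0, 0.0] if vel is None else list(vel)
        return {"x": self._pos, "xdot": self._vel}

env = ReacherSketch()
ob = env.reset(pos=[1.0, 0.0], vel=[0.0, 0.0])
```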
Joint limits are currently static class attributes and cannot be modified from the outside. This should be possible through either their init function or a separate function.
This is important for ground robots and point masses; it should affect the size of the window.
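Moving the limits from static class attributes to instance attributes could look like this; the names (`lim_up_pos`, `set_limits`) are illustrative, not taken from the repository.

```python
import math

class NLinkSketch:
    """Joint limits as instance attributes instead of static class
    attributes, settable via __init__ or a separate method."""

    def __init__(self, n=2, lim_up_pos=None):
        self._n = n
        # default to +/- pi per joint when no limits are given
        self._lim_up_pos = [math.pi] * n if lim_up_pos is None else list(lim_up_pos)
        self._lim_low_pos = [-l for l in self._lim_up_pos]

    def set_limits(self, up, low):
        # alternative entry point: change limits after construction
        self._lim_up_pos = list(up)
        self._lim_low_pos = list(low)
```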
When launching the groundRobot example, an error is thrown due to a wrong number of arguments, see link groundRobot.py
Needs fixing in groundRobot class: function definition.
Seems to be a confusion between vel_dd.py and vel.py.
Since #33, the ground robot does not work anymore because the sensorState is never initialized and remains None after reset. Then _get_ob fails.
@alxschwrz Can you maybe fix this? Either initialize self.sensorState in groundRobotEnv.reset() as an empty dict, or directly initialize it as such in the constructor.
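Both proposed fixes amount to a one-liner each; a minimal sketch (class and attribute names abbreviated for illustration):

```python
class GroundRobotSketch:
    """Sketch of the proposed fix: the sensor state starts as an
    empty dict so _get_ob never encounters None."""

    def __init__(self):
        self.sensor_state = {}   # option 2: initialize in the constructor

    def reset(self):
        self.sensor_state = {}   # option 1: (re-)initialize on reset
        return self._get_ob()

    def _get_ob(self):
        # safe even before any sensor has written a reading
        return dict(self.sensor_state)
```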
No obstacles and goals can be provided to the environments at the moment.
Such options should be provided, either through the constructor or through a separate function.
The information on goals and obstacles can be passed explicitly (coordinates) or implicitly (simulated sensing) through observations.
This is a huge effort, and I suggest starting with the simple point robot example to try it out.
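A possible shape for such an API, sketched with hypothetical method names (`add_obstacle`/`add_goal` do not exist in the repository yet):

```python
class PlanarEnvSketch:
    """Hypothetical API for passing obstacles and goals, either via
    the constructor or via separate add_* methods."""

    def __init__(self, obstacles=None, goals=None):
        self._obstacles = list(obstacles or [])
        self._goals = list(goals or [])

    def add_obstacle(self, obstacle):
        self._obstacles.append(obstacle)

    def add_goal(self, goal):
        self._goals.append(goal)

    def _get_ob(self):
        # explicit variant: coordinates are exposed directly;
        # an implicit variant would route them through a sensor
        return {"obstacles": list(self._obstacles),
                "goals": list(self._goals)}
```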
Still, there are some inconsistent reset functions, see nLinkReacher/envs/tor.py
A fix is urgent.
It would be nice to have the rendering adapted to specific parameters of each experiment.
This could be achieved by setting the size of the Matplotlib window here to the specified MAX_POS values of each experiment.
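The mapping could be as simple as deriving the window bounds from MAX_POS; a sketch (function name and margin are assumptions):

```python
def window_bounds(max_pos, margin=1.0):
    """Symmetric rendering bounds derived from an experiment's
    MAX_POS, with a small margin so the robot never touches the
    window edge."""
    lim = max_pos + margin
    return (-lim, lim, -lim, lim)  # (left, right, bottom, top)
```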
To make future development easier, it is better to keep some variable names the same as the defaults, e.g. _observation_space -> observation_space.
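If renaming the attribute everywhere at once is too invasive, the private name can be kept internally and exposed under gym's default name via a property; a sketch:

```python
class EnvSketch:
    """Keeps the internal _observation_space attribute but exposes
    it under gym's conventional name observation_space."""

    def __init__(self, space=None):
        self._observation_space = space

    @property
    def observation_space(self):
        # external code (e.g. stable-baselines3) reads the default name
        return self._observation_space
```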
The ground robot supports an arm with only one link.
While the structure with self._n_arm is already in place, the rendering has to be adapted, see rendering
Also requires verification of the dynamics function and the integration scheme.
Could not run examples/nLinkReacher.py
Error in line 19 of nLinkReacher.py:
time.sleep(env._dt)
AttributeError: attempted to get missing private attribute '_dt'
For reproduction:
Running nLinkReacher.py in python3 venv
Python version: 3.7.4
The resetLimits function resets the limits of the rendered window. While it does affect the rendering, it currently does not affect the observationSpace (self._limUpPos etc.).
Should the observation space be set according to the values given in resetLimits, or should it stay the same?
If it should be set accordingly, I would propose addressing the limits directly rather than resetting the observationSpace again.
I am working on the PseudoSensor right now in #31, which also sets the observation space. I could address this issue in there.
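Keeping the observation space in sync with resetLimits could look like the sketch below; the real code would build a gym spaces object, which is replaced here by plain low/high lists for illustration.

```python
class LimitsSketch:
    """Sketch: resetting the limits also rebuilds the observation
    space from the same values, so the two never drift apart."""

    def __init__(self, lim_up_pos):
        self._lim_up_pos = list(lim_up_pos)
        self._rebuild_observation_space()

    def _rebuild_observation_space(self):
        # stand-in for constructing a gym.spaces Box/Dict
        self.observation_space = {
            "low": [-l for l in self._lim_up_pos],
            "high": list(self._lim_up_pos),
        }

    def reset_limits(self, lim_up_pos):
        self._lim_up_pos = list(lim_up_pos)
        self._rebuild_observation_space()  # keep space and rendering in sync
```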
Before the package can be released on pypi, the structure should be cleaned up.
Ideally using poetry.
Currently, all robot state information is concatenated in one numpy.array.
For the simplest robots (nLinkReacher, pointRobot), this is sufficient, but for more complex robots (groundRobot with differential drive), it can quickly become confusing: in which position do you find which state information? It becomes even trickier when information on obstacles should be returned.
To improve the readability of the code, it would be beneficial to use dictionaries for the returned values instead.
In doing so, the observation spaces need to be adapted to the type Dict, see https://github.com/openai/gym/blob/master/gym/spaces/dict.py.
Here, it might be important to also integrate verification of the returned values and the defined observation space.
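A sketch of that verification step, with plain (low, high) bound pairs standing in for gym.spaces.Dict and its contains() check:

```python
def verify_observation(ob, space):
    """Checks a dict observation against a dict 'space' mapping each
    key to (low, high) bound lists; gym.spaces.Dict would replace
    this stand-in in the real implementation."""
    for key, (low, high) in space.items():
        if key not in ob:
            return False  # a declared key is missing from the observation
        if not all(lo <= v <= hi for v, lo, hi in zip(ob[key], low, high)):
            return False  # a value lies outside its declared bounds
    return True

# illustrative space and observation (names are assumptions)
space = {"joint_positions": ([-3.14, -3.14], [3.14, 3.14])}
ob = {"joint_positions": [0.5, -1.0]}
```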
In the current implementation, rendering is invoked by the user in every time step (for example: gym_envs_planar/examples/mobile_robot.py, line 20 in 4cb59ee).
It would be better, and more consistent with other environments such as gym_envs_urdf, to have a render option in the constructor.
I have already changed that for two environments, /pointRobot/envs/acc.py and /nLinkReacher/envs/acc.py in the commit 4cb59ee.
That change should be applied for all environments to maintain easy switching between environment control types.
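The pattern can be sketched as follows, with stub dynamics; the flag name `render` and the stubbed bodies are illustrative, not the exact code from commit 4cb59ee.

```python
class RenderFlagSketch:
    """Rendering is controlled by a constructor flag; the user loop
    then only calls step() and never render() directly."""

    def __init__(self, render=False):
        self._render = render
        self.render_calls = 0   # counter instead of a real viewer

    def render(self):
        self.render_calls += 1

    def step(self, action):
        ob = list(action)       # stub: real envs integrate dynamics here
        if self._render:
            self.render()       # env decides, not the user loop
        return ob
```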
When installing with python3.10, casadi cannot be found.
Traceback (most recent call last):
  File "/home/nagrawal/.cache/pypoetry/virtualenvs/plannerbenchmark-nPaI4okZ-py3.8/lib/python3.8/site-packages/gym/envs/classic_control/rendering.py", line 27, in <module>
    from pyglet.gl import *
  File "/home/nagrawal/.cache/pypoetry/virtualenvs/plannerbenchmark-nPaI4okZ-py3.8/lib/python3.8/site-packages/pyglet/gl/__init__.py", line 95, in <module>
    from pyglet.gl.gl import *
  File "/home/nagrawal/.cache/pypoetry/virtualenvs/plannerbenchmark-nPaI4okZ-py3.8/lib/python3.8/site-packages/pyglet/gl/gl.py", line 45, in <module>
    from pyglet.gl.lib import link_GL as _link_function
  File "/home/nagrawal/.cache/pypoetry/virtualenvs/plannerbenchmark-nPaI4okZ-py3.8/lib/python3.8/site-packages/pyglet/gl/lib.py", line 149, in <module>
    from pyglet.gl.lib_glx import link_GL, link_GLU, link_GLX
  File "/home/nagrawal/.cache/pypoetry/virtualenvs/plannerbenchmark-nPaI4okZ-py3.8/lib/python3.8/site-packages/pyglet/gl/lib_glx.py", line 46, in <module>
    glu_lib = pyglet.lib.load_library('GLU')
  File "/home/nagrawal/.cache/pypoetry/virtualenvs/plannerbenchmark-nPaI4okZ-py3.8/lib/python3.8/site-packages/pyglet/lib.py", line 164, in load_library
    raise ImportError('Library "%s" not found.' % names[0])
ImportError: Library "GLU" not found.
The sensor is currently placed outside the planarenvs directory. As a consequence, sensors are not installed when installing the package.
I am currently migrating my planner to use the integrated pseudo-sensor for goals (soon also obstacles).
I run into a potential bug when using the goal position sensor:
When the goal exceeds the limit set in the sensor, the information is simply clipped to the limit.
While this seems to make sense when using a relative distance sensor, it is restrictive with the absolute position sensor.
Was this the intended behavior, @alxschwrz ?
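The reported behavior can be reproduced in a few lines; the `clip` switch suggested here is a possible fix, not the sensor's current API.

```python
def sense_position(pos, limit, clip=True):
    """Mimics the reported behavior: with clip=True, absolute
    positions beyond +/- limit are clamped to the limit, which is
    lossy for an absolute position sensor; clip=False would return
    the true position untouched."""
    if not clip:
        return list(pos)
    return [max(-limit, min(limit, p)) for p in pos]
```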
pylint reports some invalid-name warnings, such as _limUpPos.