ai's Issues

Asset Store: Improved description

Looking at similar assets in the Asset Store (e.g. Behavior Designer), we should improve the description to contain more details and use cases.

Tasks

  • Improve description of Polarith AI Free
  • Improve description of Polarith AI Pro

Unexpected behaviour in mathv.GetNearestEdge()

mathv.GetNearestEdge() returns the first edge onto which the given point projects orthogonally. For points that do not project orthogonally onto any edge, the closest edge is returned.

[Image: nearestEdge (modified)]

I would expect that, even for points that project orthogonally onto some edge, there is a check to determine which edge is actually closer.
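
For illustration, a minimal sketch of the expected behaviour, assuming edges are given as consecutive vertex pairs (this is not the actual mathv implementation): project the point onto every edge, clamp to the segment, and return the edge with the smallest distance.

    using UnityEngine;

    public static class NearestEdgeSketch
    {
        // Edges are assumed to be the segments (vertices[i], vertices[i + 1]).
        public static int GetNearestEdge(Vector3[] vertices, Vector3 point)
        {
            int nearest = -1;
            float bestSqrDistance = float.MaxValue;

            for (int i = 0; i < vertices.Length - 1; i++)
            {
                Vector3 a = vertices[i];
                Vector3 b = vertices[i + 1];
                Vector3 ab = b - a;

                // Closest point on the segment to the query point.
                float t = Mathf.Clamp01(Vector3.Dot(point - a, ab) / ab.sqrMagnitude);
                Vector3 closest = a + t * ab;

                float sqrDistance = (point - closest).sqrMagnitude;
                if (sqrDistance < bestSqrDistance)
                {
                    bestSqrDistance = sqrDistance;
                    nearest = i;
                }
            }
            return nearest;
        }
    }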

Behaviours: Caching information about percepts

A couple of users intuitively look for functionality that allows them to extract the most important information about percepts from behaviours. In particular, radius behaviours should provide some information about the game objects located within their radii.

Before we can work on this issue, we need to draft a concept and define the corresponding tasks.

Won't work in Unity WebGL

I've encountered the following bug using the latest version of Polarith together with Unity 2017.3.1p4.

Using a new project, I make a WebGL build with just the 2D boids scene and all default build parameters.

It won't load in the browser, with the following error message:

Not implemented: Class::FromIl2CppType

This message repeats until the WebGL build crashes.

Advanced asynchronous load balancing

Currently, the asynchronous functionality in AIMThreading is a little inconsistent. The time needed to call all PrepareEvaluation methods is not considered at all. In consequence, this leads to micro lags when all agents have finished their work and the system begins to update everything again. A possible improvement would be to measure the time spent in PrepareEvaluation and to yield once the upper bound of the configured update frequency is reached, so that the next time the AI wants to update, we can continue to call PrepareEvaluation. This can also be done while the agents are updated on their own sub-threads. One consequence would be that the agents work with different views of the world, but I think this is to be expected for truly asynchronous functionality. 😉
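
A minimal sketch of the idea, with hypothetical names (the actual AIMThreading internals look different): measure the time spent on PrepareEvaluation calls and yield once the budget derived from the configured update frequency is exceeded, then continue with the remaining agents on the next frame.

    using System;
    using System.Collections;
    using System.Collections.Generic;
    using System.Diagnostics;

    public class PreparationScheduler
    {
        // Assumed setting: desired AI updates per second.
        public double UpdateFrequency = 30.0;

        // Each callback stands in for one agent's PrepareEvaluation call.
        public IEnumerator PrepareAll(IList<Action> prepareCallbacks)
        {
            double budgetMs = 1000.0 / UpdateFrequency;
            Stopwatch stopwatch = Stopwatch.StartNew();

            for (int i = 0; i < prepareCallbacks.Count; i++)
            {
                prepareCallbacks[i]();

                if (stopwatch.Elapsed.TotalMilliseconds > budgetMs)
                {
                    // Budget exhausted: resume with the remaining agents next frame,
                    // while already prepared agents can be evaluated on sub-threads.
                    yield return null;
                    stopwatch.Restart();
                }
            }
        }
    }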

Multi-scene editing

It seems that the AIMSteeringPerceiver does not support Unity's multi-scene editing at the moment. We should improve this.

Tasks

  • Analyze the code base for changes to be made
  • Discussion with the team and/or the community
  • Implement a solution
  • Test the new implementation
  • Polishing
  • Final testing

Overhauled custom UI

In order to improve the overall usability, we decided to overhaul the current user interface. The major workflow will benefit a lot from the new UI concept. We designed it specifically to let you do everything much quicker and to reduce visual clutter during your work as much as possible.

Transformation artifacts concerning static objects

Enabling our AI currently results in a bug regarding static objects. We are valiantly working on solving the issue.

[Image: static-bug]

Cause

It is due to the way we transform objects in order to obtain visual (mesh) oriented bounding boxes (OBBs).
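
For context, a minimal sketch of the general approach (not our actual code): the local-space mesh bounds are transformed into world space with the object's transform, which yields the eight corners of a mesh-aligned OBB.

    using UnityEngine;

    public static class VisualObbSketch
    {
        // Returns the eight world-space corners of the mesh-aligned bounding box.
        public static Vector3[] GetWorldCorners(MeshFilter meshFilter)
        {
            Bounds local = meshFilter.sharedMesh.bounds;
            Transform transform = meshFilter.transform;

            var corners = new Vector3[8];
            int index = 0;
            for (int x = -1; x <= 1; x += 2)
                for (int y = -1; y <= 1; y += 2)
                    for (int z = -1; z <= 1; z += 2)
                    {
                        Vector3 localCorner = local.center +
                            Vector3.Scale(local.extents, new Vector3(x, y, z));
                        corners[index++] = transform.TransformPoint(localCorner);
                    }
            return corners;
        }
    }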

Effect

Static objects are transformed onto other objects in the scene or disappear completely.

Workaround

For now, it is not possible to work with static objects until the upcoming patch.

Packages: Minor naming mistakes

Currently, we've got a minor naming issue concerning the 'Character' model within our Unity packages. Parts of the model, as well as material names, accidentally contain the name 'Figure'.

To fix this, we need to re-export the FBX file(s).

Tasks

  • Rework the FBX file in Blender
  • Re-export the FBX file properly
  • Check if the packages are fine then

Pathfinding integration

With feature #10 as a basis, we are able to utilize Unity's inbuilt pathfinding functionality to bring together the best of both worlds: Local and global decision making for movement in one lightweight plugin.

Rigidbody2D rotation issues

I tried your support form late Saturday night and have had no response yet, so I'll try here.

As soon as I attach a target game object in seek or follow or pursue (or anywhere it seems), that target object starts glitching. Disable the seek component, or remove the game object from the target list, and the target object behaves fine. Thus, it is pretty clearly Polarith related. The weirdest part is that I haven't put any AIM components on the target object yet, and yet, AIM is somehow messing with it.

Anyhow, after a ton of testing and digging, I noticed that the rotation value on the target Rigidbody2D is somehow being reset to stay in the -180 to 180 range. It normally rotates to 360 and beyond. This resetting may not be a problem for singular objects, but it really messes with the transformations of objects joined to that object with 2D joints. For example, in my game, GRITS Racing, I have cars with breakable wheels, so the wheels cannot be children of the body transform. Thus, when Polarith rotates the rigidbody of the car body by 360 degrees to stay in bounds, it whacks out the wheels. Something else is also messing with driving straight, but I haven't figured that one out yet... I bet it's something similar with Polarith.

I still have enough control to keep building up the AI but I can't publish anything with Polarith until this is fixed.
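
For illustration, a minimal sketch of the suspected mechanism (not Polarith's actual controller code): writing a normalized absolute angle to Rigidbody2D.rotation snaps accumulated rotation back into the -180 to 180 range, whereas applying a shortest-path delta preserves the accumulated value that attached 2D joints rely on.

    using UnityEngine;

    [RequireComponent(typeof(Rigidbody2D))]
    public class RotationDeltaSketch : MonoBehaviour
    {
        private Rigidbody2D body;

        private void Awake()
        {
            body = GetComponent<Rigidbody2D>();
        }

        public void RotateTowards(float targetAngleDegrees)
        {
            // Problematic: body.rotation = targetAngleDegrees;
            // This discards accumulated revolutions (e.g. 540 becomes 180).

            // Preserves the accumulated rotation, so joined wheels stay consistent.
            float delta = Mathf.DeltaAngle(body.rotation, targetAngleDegrees);
            body.MoveRotation(body.rotation + delta);
        }
    }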

Copyright date 2018

Happy new year! The copyright information in the code and the DLLs needs to be adapted for 2018.

Tasks

  • Update copyright date in assembly information
  • Update copyright date in code

AIMUnityPathfinding

DestinationGameObject must be set on start. Otherwise, it won't update correctly at runtime with AIMFollowWaypoints.

Inbuilt path structure and behaviours

We are currently working on the first patch for Polarith AI. Besides minor and major bug fixes, it will include a brand-new path structure you can comfortably use to achieve things like autonomous patrols.

[Image: path]

Moreover, the best thing about this patch is that it lays the foundation for our own pathfinding in the future. Keep your eyes peeled!

Upcoming Features

  • Path component
  • Comprehensive tools for creating and manipulating paths
  • New AI behaviour: Path following

Packages: Physics and behaviour tweaks

With our 'Shiny Packages' update, we added a lot of new example scenes. Even though they work well in general, we received feedback that agents tend to behave problematically under certain circumstances. Especially the physics in the Lab scenes can cause problems with certain Unity versions. Moreover, the behaviour parameters of the TinyWood agent can be optimized as well.

Tasks

  • Optimize the physics in the Lab scenes
  • Optimize the TinyWood scene

AIMContextEvaluation must not be a class

The class AIMContextEvaluation could be an interface. This way, it would be far more flexible when users want to write their own update manager, e.g., for achieving determinism.

With an interface, developers would be able to use their already existing update manager by just implementing an additional interface instead of having an extra (almost empty) game object which has to live in parallel only because we enforce a MonoBehaviour with the abstract class AIMContextEvaluation.
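
A minimal sketch of what this could look like, with an assumed method name (the real AIMContextEvaluation API differs): an interface carrying only the evaluation hook, which an already existing update manager can implement directly.

    // Assumed single hook; the actual members of AIMContextEvaluation may differ.
    public interface IContextEvaluation
    {
        void UpdateContexts();
    }

    // An already existing, plain C# update manager can implement the interface
    // without requiring an extra (almost empty) game object in the scene.
    public class DeterministicUpdateManager : IContextEvaluation
    {
        public void UpdateContexts()
        {
            // Evaluate all registered AIMContext instances in a fixed, deterministic order.
        }
    }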

Tasks

  • Rework AIMContextEvaluation to be an interface IContextEvaluation
  • Adapt corresponding classes which use IContextEvaluation appropriately

Community feedback: Documentation

Call for your help!

We want to improve our documentation. Since we as the developers have all the background knowledge about the system and especially the source code, it's hard for us to find the gaps in the documentation.
We ask for your help to fix this. If you find unclear or weak formulations in the documentation, especially in the API reference, you are welcome to blame us :) So please show us the parts that confuse you.
Before you post a problem, please make sure you have read the documentation for the behaviour entirely, meaning both the front-end and back-end API, the component reference, and the same for the base class.

Post it here using the following pattern:

<Component name>, < link to doc >, "unclear formulation"
'Your problem of understanding in a brief explanation'

Example:

Orbit, Orbit.DeltaAngle, "Specifies the target position on the orbit."
How is the target position specified by the DeltaAngle? What does the angle influence and how?

Perception range for bounds

Dennis sent an email:

I have an agent that is using SeekBounds (using Collider OBB) in order to avoid environment objects like buildings in my scene. These buildings may have a center position of (0, 0, 0), with a width and length of 10 units and a height of 20 world units. I've noticed that when the agent is flying around high (e.g. at y=15), the agent doesn't detect the building percept for the SeekBounds behavior.

After debugging this, I wanted to inquire about the following:

  1. In AIMSteeringPerceiver.GetPerceptsInRange(...), the code calculates if ((percept.Position - point).sqrMagnitude > rangeSqr) to see if the percept is within range. However, should this be calculated against the closest point on the object's collider bounds (see the sketch after this list)? Otherwise, if the agent is at (5, 15, 5) and very close to the building, the above calculation would compute the distance using the position (0, 0, 0). This would lead to a distance that is much greater than 5, and the percept won't be detected.
  2. The same applies to RegularGrid.Query(...), where the code calculates if ((cellElement.Position - point).sqrMagnitude < rangeSqr) to see if the percept is within range.
  3. If (2) is true, would that mean that we also need to populate the percepts in each grid cell in RegularGrid.UpdateGrid(...) using the object collider bounds instead of the object position? In addition, would that mean that a percept could appear in multiple grid cells, so we would need to check if a cellElement was already in the previously tested set of cellElements so we don't duplicate calculations and add the same cellElement to percepts (lines 258-268)?
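
A minimal sketch of the range check suggested in (1), assuming access to the percept's Collider; the closest point on the collider's bounds is used instead of the pivot position.

    using UnityEngine;

    public static class BoundsRangeCheck
    {
        public static bool IsInRange(Collider perceptCollider, Vector3 point, float range)
        {
            // Bounds.ClosestPoint is cheap (axis-aligned box); Collider.ClosestPoint
            // would be more precise but also more expensive.
            Vector3 closest = perceptCollider.bounds.ClosestPoint(point);
            return (closest - point).sqrMagnitude <= range * range;
        }
    }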

Update of freely available source code

As you might have noticed, within this repository we offer a couple of scripts which are part of Polarith AI for studying its major functionalities. Due to the amount of development work we have done recently, we should update them where necessary.

Tasks

  • Check this public repository for updates
  • Update the freely available source code in this public repository
  • Check the asset packages for updates
  • Update the freely available source code in the asset packages

Racing behaviours

With the inbuilt path structure as a basis, we are going to create new behaviours and tools which will make it very easy to set up natural and astonishing racing AI agents.

Our plan is to use multiple paths for defining the racetrack as well as using paths to define ideal trajectories for agents. Context steering will then be applied using a linear sensor to sample the racetrack for ideal movement and dangerous things.

More details will follow when we start the development phase of v1.1.

Automatic LOD switching

Our technology provides very powerful scaling abilities. We are going to make this feature more comfortable to use, so we will provide a handy component to set up different sensors for the different levels of detail you may want for your agents.

As known from typical level-of-detail approaches for meshes in computer graphics, you will be able to define different sets of sensors to use in different situations. For example, if an agent is far away from the player, it can use a low-resolution sensor; if it is close to the player, it can use a high-resolution sensor for smoother and more accurate movement.
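
A minimal sketch of the idea with placeholder names (the actual component and sensor types are not decided yet): choose between a low-resolution and a high-resolution sensor based on the squared distance to the player.

    using UnityEngine;

    public class SensorLodSwitchSketch : MonoBehaviour
    {
        public Transform player;
        public float highDetailRange = 20f;

        public Object lowResSensor;   // placeholder for a low-resolution sensor asset
        public Object highResSensor;  // placeholder for a high-resolution sensor asset

        public Object ActiveSensor { get; private set; }

        private void Update()
        {
            float sqrDistance = (player.position - transform.position).sqrMagnitude;
            ActiveSensor = sqrDistance <= highDetailRange * highDetailRange
                ? highResSensor
                : lowResSensor;
        }
    }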

More details will follow when we start the development phase of v1.1.

Seek/Flee NavMesh should detect agents outside

As our friends at ION LANDS pointed out, at the moment, issues can arise when using AIMSeekNavMesh/AIMFleeNavMesh together with AIMUnityPathfinding.

Because Unity's pathfinding algorithm tends to place waypoints near NavMesh corners, especially for small meshes, the pathfinding and the NavMesh detection get in each other's way. At first glance, this problem cannot be solved easily because Unity's API for detecting whether an agent is outside of a NavMesh is rudimentary and expensive.

One possible solution: we can utilize Unity's methods for detecting an agent's position relative to a NavMesh (although this might be expensive) and then invert the magnitudes of the behaviours: when an agent is outside of a NavMesh, an AIMSeekNavMesh becomes an AIMFleeNavMesh and vice versa.
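
A minimal sketch of the detection step using NavMesh.SamplePosition (the tolerance value is an assumption and would need tuning):

    using UnityEngine;
    using UnityEngine.AI;

    public static class NavMeshDetectionSketch
    {
        public static bool IsOutsideNavMesh(Vector3 position, float tolerance = 0.1f)
        {
            NavMeshHit hit;
            if (!NavMesh.SamplePosition(position, out hit, tolerance, NavMesh.AllAreas))
                return true; // no NavMesh within tolerance at all

            // On the mesh only if the sampled point is (almost) identical to the query.
            return (hit.position - position).sqrMagnitude > tolerance * tolerance;
        }
    }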

Tasks

  • Investigate different solutions
  • Evaluate alternate solutions
  • Implementation of the 'best' solution
  • Testing

Update to latest Doxygen

At the moment, we need to stay on Doxygen 1.8.13 due to some HTML changes in newer versions. For example, the left nav tree seems to be incomplete and some margins/paddings, especially on the start page, seem to be broken.

Nevertheless, Dimitri van Heesch made a lot of improvements to Doxygen, especially regarding source code parsing so that the Polarith AI docs would benefit from an update.

Tasks

  • Find out all problems using the latest Doxygen version
  • Fix all issues
    • Left nav tree incomplete
    • Broken margins/paddings (on start page)
  • Review new docs

Non-Spherical Radius Gizmos

Our friend BCFEGames from the Unity forum found a bug: AIMReduction shows only a planar gizmo in the XY plane. When used with a spherical sensor, there should be a spherical gizmo like in our radius steering behaviours.

  • Check all behaviours for consistency
  • Fix gizmos

Update 3D controllers for adaptive speed

Currently, our 3D controllers are not able to turn the magnitude of the decision into a speed factor. This should be optional, as in our other controllers.
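
A minimal sketch of the intended option, with placeholder names for the values a controller receives from the AI (the actual controller API differs): the decided magnitude optionally scales the movement speed.

    using UnityEngine;

    public class AdaptiveSpeedSketch : MonoBehaviour
    {
        public float maxSpeed = 5f;
        public bool useAdaptiveSpeed = true; // should be optional, as in the other controllers

        public void Move(Vector3 decidedDirection, float decidedMagnitude)
        {
            float speed = useAdaptiveSpeed ? maxSpeed * decidedMagnitude : maxSpeed;
            transform.position += decidedDirection.normalized * speed * Time.deltaTime;
        }
    }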

Bugs in the release version

The following bugs are known to be present in the release version. With the first patch, we are going to fix them all.

Known Bugs in v1.0

  • Context component
    • Sporadic exception if ShowReceptors is set to true
    • Indicator visualization does not consider object scale
    • API offers no possibility to set the epsilon-constraints in code
  • Shaper component
    • Shaper does not consider the sensor orientation
    • Adding receptors by using ctrl + click works only in PlaneXY mode
    • Planar orientation field does not show the exact status of the sensor asset
  • Orbit behaviour parameter DistanceMapping is obsolete and has no effect
  • Interpolation does not distinguish between minimized and maximized objectives
  • Minimal amount of GC in AI behaviours
  • Minor documentation design bugs on mobile devices

Layer based perception

Another idea to improve the way users can work with scene objects: let's use Unity layers for specifying AI environments as an alternative to game object lists.
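
A minimal sketch of what layer-based gathering could look like, using a simple physics overlap query (the actual implementation inside the perceiver would differ):

    using System.Collections.Generic;
    using UnityEngine;

    public class LayerEnvironmentSketch : MonoBehaviour
    {
        public LayerMask environmentLayers;
        public float gatherRadius = 50f;

        // Gathers all objects on the configured layers instead of using a manual game object list.
        public List<GameObject> GatherObjects()
        {
            var result = new List<GameObject>();
            foreach (Collider collider in
                Physics.OverlapSphere(transform.position, gatherRadius, environmentLayers))
            {
                result.Add(collider.gameObject);
            }
            return result;
        }
    }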

Velocity Gizmo missing

Check the availability of the velocity gizmo in behaviours derived directly from SteeringBehaviour.
