pyperplan's People

Contributors

bonetblai, jendrikseipp, maltehelmert, robertmattmueller, tomsilver

pyperplan's Issues

Multiple runs return different heuristic values for the same state

Hi,

I'm interested in using the heuristic modules, but repeated runs of the same script, which contains the definition of a task and a state, return different heuristic values. Is something wrong in my code, or is this a problem in pyperplan?

The code follows; if you run it multiple times, it can produce different values.

from pyperplan.task import Task, Operator

from pyperplan.heuristics.blind import BlindHeuristic
from pyperplan.heuristics.lm_cut import LmCutHeuristic
from pyperplan.search.searchspace import SearchNode

# Operator(name, preconditions, add_effects, del_effects)
op1 = Operator("op1", {1, 2}, {3, 4}, {1, 2})
op2 = Operator("op2", {3, 4}, {1, 2}, {3, 4})

# Task(name, facts, initial_state, goals, operators)
t1 = Task("t1", {-1, -2, -3, -4, 1, 2, 3, 4}, {1, 2}, {3, 4}, {op1, op2})

h_blind = BlindHeuristic(t1)
h_lmcut = LmCutHeuristic(t1)

# SearchNode(state, parent, action, g)
nodo1 = SearchNode({1, 2}, None, "ac1", 0)
hstate = h_blind(nodo1)
print("heuristic nodo 1", hstate)

nodo2 = SearchNode({4, 3, -1}, None, "ac2", 0)
hstate = h_lmcut(nodo2)
print("heuristic nodo 2", hstate)

Thanks

Enforced Hill-Climbing returns unsolvable for airport-16

Original report by Martin Goth (Bitbucket: mgoth).


Enforced hill climbing with hadd returns unsolvable on airport/task16.pddl, while other planner configurations find a solution:

python3 target_algorithms/planning/pyperplan/algorithm/pyperplan.py -s ehs -H hadd instances/planning/data/pyper-benchmark-airport/task16.pddl
2016-02-01 11:06:35,300 INFO     Found domain /home/gothm/thesis/aclib_repo/working/instances/planning/data/pyper-benchmark-airport/domain16.pddl
2016-02-01 11:06:35,300 INFO     using search: enforced_hillclimbing_search
2016-02-01 11:06:35,300 INFO     using heuristic: hAddHeuristic
2016-02-01 11:06:35,300 INFO     Parsing Domain /home/gothm/thesis/aclib_repo/working/instances/planning/data/pyper-benchmark-airport/domain16.pddl
2016-02-01 11:06:35,350 INFO     Parsing Problem /home/gothm/thesis/aclib_repo/working/instances/planning/data/pyper-benchmark-airport/task16.pddl
2016-02-01 11:06:35,355 INFO     12 Predicates parsed
2016-02-01 11:06:35,355 INFO     145 Actions parsed
2016-02-01 11:06:35,355 INFO     0 Objects parsed
2016-02-01 11:06:35,355 INFO     53 Constants parsed
2016-02-01 11:06:35,355 INFO     Grounding start: problem_x
2016-02-01 11:06:35,384 INFO     Relevance analysis removed 236 facts
2016-02-01 11:06:35,384 INFO     Grounding end: problem_x
2016-02-01 11:06:35,384 INFO     618 Variables created
2016-02-01 11:06:35,384 INFO     580 Operators created
2016-02-01 11:06:35,386 INFO     Search start: problem_x
2016-02-01 11:06:35,389 INFO     Initial h value: 244.000000
2016-02-01 11:06:53,964 INFO     No operators left. Task unsolvable.
2016-02-01 11:06:53,964 INFO     280 Nodes expanded
2016-02-01 11:06:53,965 INFO     Search end: problem_x
2016-02-01 11:06:53,965 INFO     Wall-clock search time: 1.9e+01
2016-02-01 11:06:53,966 WARNING  No solution could be found
python3 target_algorithms/planning/pyperplan/algorithm/pyperplan.py -s wastar -H hff instances/planning/data/pyper-benchmark-airport/task16.pddl
2016-02-01 11:07:13,727 INFO     Found domain /home/gothm/thesis/aclib_repo/working/instances/planning/data/pyper-benchmark-airport/domain16.pddl
2016-02-01 11:07:13,727 INFO     using search: weighted_astar_search
2016-02-01 11:07:13,727 INFO     using heuristic: hFFHeuristic
2016-02-01 11:07:13,728 INFO     Parsing Domain /home/gothm/thesis/aclib_repo/working/instances/planning/data/pyper-benchmark-airport/domain16.pddl
2016-02-01 11:07:13,782 INFO     Parsing Problem /home/gothm/thesis/aclib_repo/working/instances/planning/data/pyper-benchmark-airport/task16.pddl
2016-02-01 11:07:13,786 INFO     12 Predicates parsed
2016-02-01 11:07:13,786 INFO     145 Actions parsed
2016-02-01 11:07:13,787 INFO     0 Objects parsed
2016-02-01 11:07:13,787 INFO     53 Constants parsed
2016-02-01 11:07:13,787 INFO     Grounding start: problem_x
2016-02-01 11:07:13,815 INFO     Relevance analysis removed 236 facts
2016-02-01 11:07:13,815 INFO     Grounding end: problem_x
2016-02-01 11:07:13,815 INFO     618 Variables created
2016-02-01 11:07:13,815 INFO     580 Operators created
2016-02-01 11:07:13,817 INFO     Search start: problem_x
2016-02-01 11:07:13,821 INFO     Initial h value: 79.000000
2016-02-01 11:08:07,439 INFO     Goal reached. Start extraction of solution.
2016-02-01 11:08:07,439 INFO     5522 Nodes expanded
2016-02-01 11:08:07,449 INFO     Search end: problem_x
2016-02-01 11:08:07,449 INFO     Wall-clock search time: 5.4e+01
2016-02-01 11:08:07,451 INFO     Plan length: 83
2016-02-01 11:08:07,464 INFO     validate could not be found on the PATH so the plan can not be validated.

Expected behaviour: Enforced hill-climbing only returns unsolvable if there exists no solution for the task.
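For context on the failure mode: enforced hill climbing commits to the first strictly improving state it finds and gives up once the local breadth-first search exhausts its frontier, so a greedy jump into a dead end ends the whole search. A generic textbook sketch (not pyperplan's actual implementation) makes this visible:

```python
from collections import deque

def enforced_hill_climbing(initial, successors, h, is_goal):
    """Generic EHC sketch: from the current state, run breadth-first
    search until a state with strictly smaller h is found; commit to it
    and repeat. If the BFS frontier empties first, give up."""
    state, best_h, plan = initial, h(initial), []
    while not is_goal(state):
        queue = deque([(state, [])])
        seen = {state}
        improved = None
        while queue and improved is None:
            s, path = queue.popleft()
            for action, succ in successors(s):
                if succ in seen:
                    continue
                seen.add(succ)
                if h(succ) < best_h:
                    improved = (succ, path + [action])
                    break
                queue.append((succ, path + [action]))
        if improved is None:
            return None  # reported as unsolvable, even if a plan exists
        state, steps = improved
        best_h = h(state)
        plan += steps
    return plan
```

In the sketch, returning None is the analogue of the "No operators left. Task unsolvable." message; the question this report raises is whether that message should be reserved for genuinely unsolvable tasks.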

string representation for `Problem` objects

Hello!

The string representation for Problem currently prints the object types rather than the names of the objects themselves. For example:

ipdb> print(problem)
< Problem definition: logistics-4-0 Domain: logistics Objects: ['airplane', 'airport', 'airport', 'location', 'location', 'city', 'city', 'truck', 'truck', 'package', 'package', 'package', 'package', 'package', 'package'] Initial State: ["at[('apn1', airplane), ('apt2', airport)]", "at[('tru1', truck), ('pos1', location)]", "at[('obj11', package), ('pos1', location)]", "at[('obj12', package), ('pos1', location)]", "at[('obj13', package), ('pos1', location)]", "at[('tru2', truck), ('pos2', location)]", "at[('obj21', package), ('pos2', location)]", "at[('obj22', package), ('pos2', location)]", "at[('obj23', package), ('pos2', location)]", "in-city[('pos1', location), ('cit1', city)]", "in-city[('apt1', airport), ('cit1', city)]", "in-city[('pos2', location), ('cit2', city)]", "in-city[('apt2', airport), ('cit2', city)]"] Goal State : ["at[('obj23', (physobj,)), ('pos1', (place,))]", "at[('obj13', (physobj,)), ('apt1', (place,))]", "at[('obj21', (physobj,)), ('pos1', (place,))]"] >

This is due to this line:

[self.objects[o].name for o in self.objects],

Is this intentional, or are the object names meant to be displayed instead? For example, changing the linked line to [o for o in self.objects] would result in

ipdb> print(problem)
< Problem definition: logistics-4-0 Domain: logistics Objects: ['apn1', 'apt1', 'apt2', 'pos2', 'pos1', 'cit2', 'cit1', 'tru2', 'tru1', 'obj23', 'obj22', 'obj21', 'obj13', 'obj12', 'obj11'] Initial State: ["at[('apn1', airplane), ('apt2', airport)]", "at[('tru1', truck), ('pos1', location)]", "at[('obj11', package), ('pos1', location)]", "at[('obj12', package), ('pos1', location)]", "at[('obj13', package), ('pos1', location)]", "at[('tru2', truck), ('pos2', location)]", "at[('obj21', package), ('pos2', location)]", "at[('obj22', package), ('pos2', location)]", "at[('obj23', package), ('pos2', location)]", "in-city[('pos1', location), ('cit1', city)]", "in-city[('apt1', airport), ('cit1', city)]", "in-city[('pos2', location), ('cit2', city)]", "in-city[('apt2', airport), ('cit2', city)]"] Goal State : ["at[('obj23', (physobj,)), ('pos1', (place,))]", "at[('obj13', (physobj,)), ('apt1', (place,))]", "at[('obj21', (physobj,)), ('pos1', (place,))]"] >

I'd be happy to submit a pull request if you prefer that change. Thanks for your time!

Add setup.py

Original report by Anonymous.


Can you please add a setup.py to this distribution? Without one, it is difficult to install this package as a dependency and import the module.
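For reference, a minimal setup.py along these lines might look as follows; the version, description, and src/ layout details are assumptions, not values taken from the project:

```python
from setuptools import setup, find_packages

setup(
    name="pyperplan",
    version="0.1",  # assumed; not the project's actual version
    description="Pyperplan, a lightweight STRIPS planner written in Python",
    packages=find_packages("src"),
    package_dir={"": "src"},
    python_requires=">=3.6",  # assumed minimum
)
```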

Support for :equality keyword

Pyperplan does not support the :equality keyword. When enabled, the predicate = becomes available for testing equality among objects. In the (far) past, there was also a difference when grounding operators. By default (no :equality), objects could appear at most once for different arguments of grounded actions (i.e., no object could repeat in the arguments for a grounded action). When enabled, such repetitions are allowed but then the predicate = becomes available.
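The difference in grounding can be illustrated with a small Python sketch (illustrative only, not pyperplan code): without :equality, argument tuples are drawn without repetition; with it, repetitions are allowed and (= ?x ?y) can filter them out explicitly.

```python
from itertools import permutations, product

objects = ["a", "b", "c"]
arity = 2  # number of action parameters, e.g. (?x ?y)

# Without :equality: no object may repeat among the arguments.
distinct = list(permutations(objects, arity))

# With :equality: repetitions are allowed; a schema can rule them
# out explicitly with a precondition such as (not (= ?x ?y)).
with_repeats = list(product(objects, repeat=arity))

print(len(distinct), len(with_repeats))  # 6 groundings vs. 9
```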

Nodes expanded for pyperplan vs. fast downward

Hello! Thanks again for making this great library available.

This isn't an issue, but a bit of a puzzle, and I was wondering if you would know the answer right away.

I'm comparing the number of nodes expanded by pyperplan and fast downward. For pyperplan, I'm using hFF and GBFS. For fast downward, I'm looking at two configurations: --alias lama-first and --search eager_greedy([ff()]).

I was expecting to see that in most cases, the number of nodes expanded by pyperplan (hFF / GBFS) would be similar to the number of nodes expanded by fast downward (hff / GBFS), or that fast downward would be better due to some optimizations. I was also expecting that fast downward with lama-first would be better than both.

But instead, I'm finding that pyperplan often expands fewer nodes than either fast downward configuration. For example, here are some results averaged across the first 10 problems for each of the listed domains (from the benchmarks in pyperplan).

[Screenshot: table of nodes expanded per domain, averaged over the first 10 problems]

For example, in elevators, pyperplan only expands 33.9 nodes on average, while fast downward with hFF / GBFS expands 526.20 and lama-first expands 97.80.

It's certainly possible that I am doing something silly in the way that I am collecting these results, but I wanted to see if any possible causes occurred to you immediately.

Thanks for your time!

pip install issue

Thanks very much for this extremely useful package!

I wanted to flag a minor issue with pip install pyperplan. Installing the package this way seems to add the contents of src/ directly to the installation directory (e.g. site-packages). So, for example, instead of a pyperplan directory in my site-packages, I have a heuristics directory, a pddl directory, and so on.

This causes conflicts if I am using pyperplan within a project that has files or directories with the names heuristics, pddl, etc.
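The usual remedy is to install everything under a single top-level package rather than mapping src/ to the root. A sketch of such a setup() configuration (the subpackage names are taken from the directories mentioned above; the other values are assumptions):

```python
from setuptools import setup

setup(
    name="pyperplan",
    version="0.1",  # assumed
    # nest src/ under a "pyperplan" package instead of installing it flat
    package_dir={"pyperplan": "src"},
    packages=[
        "pyperplan",
        "pyperplan.heuristics",
        "pyperplan.pddl",
        "pyperplan.search",
    ],
)
```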

Do you plan to improve this library?

I have a code base that cleans up the parser (mainly type annotations) and a separate branch that supports action costs.
Is this repo open to pull requests like this?

Iterative Deepening Search only outputs "Goal reached" on loglevel debug

Original report by Martin Goth (Bitbucket: mgoth).


Example call:

> python3 src/pyperplan.py -l info -H hff -s ids benchmark/blocks/task01.pddl
2016-01-29 16:59:40,292 INFO     Found domain /home/gothm/thesis/aclib_repo/target_algorithms/planning/pyperplan/pyperplan-files/benchmarks/blocks/domain.pddl
2016-01-29 16:59:40,292 INFO     using search: iterative_deepening_search
2016-01-29 16:59:40,292 INFO     using heuristic: None
2016-01-29 16:59:40,292 INFO     Parsing Domain /home/gothm/thesis/aclib_repo/target_algorithms/planning/pyperplan/pyperplan-files/benchmarks/blocks/domain.pddl
2016-01-29 16:59:40,294 INFO     Parsing Problem /home/gothm/thesis/aclib_repo/target_algorithms/planning/pyperplan/pyperplan-files/benchmarks/blocks/task01.pddl
2016-01-29 16:59:40,296 INFO     5 Predicates parsed
2016-01-29 16:59:40,296 INFO     4 Actions parsed
2016-01-29 16:59:40,296 INFO     4 Objects parsed
2016-01-29 16:59:40,296 INFO     0 Constants parsed
2016-01-29 16:59:40,296 INFO     Grounding start: blocks-4-0
2016-01-29 16:59:40,297 INFO     Relevance analysis removed 0 facts
2016-01-29 16:59:40,297 INFO     Grounding end: blocks-4-0
2016-01-29 16:59:40,297 INFO     29 Variables created
2016-01-29 16:59:40,297 INFO     40 Operators created
2016-01-29 16:59:40,297 INFO     Search start: blocks-4-0
2016-01-29 16:59:40,299 INFO     iterative_deepening_search: depth=6 planlength=6 
2016-01-29 16:59:40,299 INFO     65 Nodes expanded
2016-01-29 16:59:40,299 INFO     Search end: blocks-4-0
2016-01-29 16:59:40,299 INFO     Wall-clock search time: 0.0021
2016-01-29 16:59:40,299 INFO     Plan length: 6
2016-01-29 16:59:40,303 INFO     validate could not be found on the PATH so the plan can not be validated.

Expected result: Pyperplan outputs "Goal reached. Start extraction of solution." as the other search engines do. This makes it easier to parse pyperplan's output consistently for algorithm configuration.

Unchecked use of nonexistent parameters in actions

I just noticed that the parser accepts action schemas where the atoms reference nonexistent variables. For example, if the parameters are (?x ?y), the schema may have a precondition (clear ?z) or an effect (on ?x ?z).
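A check of the kind this report implies could look roughly like the following (a hypothetical helper, not part of pyperplan's parser), treating formulas as nested tuples and verifying that every ?-variable is a declared parameter:

```python
def free_variables(formula):
    """Recursively collect ?-variables from a nested-tuple formula
    such as ("and", ("clear", "?z"), ("on", "?x", "?z"))."""
    if isinstance(formula, str):
        return {formula} if formula.startswith("?") else set()
    variables = set()
    for part in formula:
        variables |= free_variables(part)
    return variables

def check_schema(parameters, precondition, effect):
    """Raise if the precondition or effect mentions an undeclared variable."""
    declared = set(parameters)
    undeclared = (free_variables(precondition) | free_variables(effect)) - declared
    if undeclared:
        raise ValueError(f"undeclared variables: {sorted(undeclared)}")
```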

Error variables must start with a "?"

I've tried to run this on my own PDDL files and on files I've found online. I've also looked through the PDDL files, and the variables definitely do start with a ?. How do I fix this?

Error unknown predicate and used in precondition of action

I am getting this error:

Error unknown predicate and used in precondition of action

from this code:

    (:action PICKUP_NORMAL
        :parameters (?b - Bot ?a - Aisle ?s - Shelf)
        ; The bot is in the aisle (At ?b ?a)
        ; The bot is not holding anything (not (Holding ?b))
        ; The bot can pick up (CanPickUp ?a ?s)
        ; The shelf must not be weighed (not (WeighableShelf ?s))
        :precondition (and
            (At ?b ?a)
            (and
                (not (Holding ?b))
                (and
                    (CanPickUp ?a ?s)
                    (not (WeighableShelf ?s))
                )

            )
        )
        ; The bot is holding something (Holding ?b)
        ; The bot is holding the shelf (HoldingShelf ?b ?s)
        :effect (and
            (Holding ?b)
            (HoldingShelf ?b ?s)
        )
    )
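The error message suggests the parser is reading the nested and as a predicate name. If the parser only accepts a flat conjunction in preconditions (an assumption about the cause, not a confirmed one), the logically equivalent flat form below may avoid the error:

```pddl
:precondition (and
    (At ?b ?a)
    (not (Holding ?b))
    (CanPickUp ?a ?s)
    (not (WeighableShelf ?s))
)
```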

Allow for removal of static facts in init and removal of irrelevant operators to be optional

It would be useful to add an option to the grounder to control the removal of static facts from the initial state, as well as the removal of irrelevant operators. At this stage, the option does not need to be exposed in the top-level executable; it would be enough to add two parameters to _ground() and ground() as follows:

def ground(problem, remove_statics_from_initial_state=True, remove_irrelevant_operators=True):
def _ground(problem, remove_statics_from_initial_state=True, remove_irrelevant_operators=True):
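To make the request concrete, here is a toy sketch of how the two flags could gate the corresponding steps. The data model and pruning rules below are deliberate simplifications, not pyperplan's actual implementation:

```python
def ground(init, operators,
           remove_statics_from_initial_state=True,
           remove_irrelevant_operators=True):
    """Toy sketch of the proposed flags. Operators here are plain dicts
    with 'pre', 'add' and 'del' fact sets -- a simplification of
    pyperplan's actual model -- and the pruning rules are naive
    stand-ins for the real analyses."""
    if remove_irrelevant_operators:
        # naive stand-in: an operator that adds nothing is irrelevant
        # (pyperplan's real relevance analysis is goal-directed)
        operators = [op for op in operators if op["add"]]
    if remove_statics_from_initial_state:
        # a fact is static if no remaining operator adds or deletes it
        fluent = set().union(*(op["add"] | op["del"] for op in operators))
        init = init & fluent
    return init, operators
```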

If you agree, please update the pip version as well.

Thanks!
