gpulost / lepl
Automatically exported from code.google.com/p/lepl
License: Other
The licence link (from http://code.google.com/p/lepl) is an auto-generated
offset. Should be possible to make RST use a fixed value ("licence")?
Original issue reported on code.google.com by [email protected]
on 31 Jan 2009 at 10:46
This happens with Python 2.6 (I don't see it with Python 3.0).
Original issue reported on code.google.com by [email protected]
on 28 Apr 2009 at 6:35
My understanding is that the following code should parse text from a file:
-----------
from lepl import *
v = Token('[a-z]+') & Token(' +')
f = open('text.txt')
v.parse_file(f)
-----------
but it fails with the following error
----------
Traceback (most recent call last):
File "t_string.py", line 6, in <module>
v.parse_file(f)
File "/HOME/.local/lib/python3.2/site-packages/LEPL-5.0.0-py3.2.egg/lepl/core/config.py", line 825, in parse_file
return self.get_parse_file()(file_, **kargs)
File "/HOME/.local/lib/python3.2/site-packages/LEPL-5.0.0-py3.2.egg/lepl/core/parser.py", line 257, in single
return next(raw(arg, **kargs))[0]
File "/HOME/.local/lib/python3.2/site-packages/LEPL-5.0.0-py3.2.egg/lepl/core/parser.py", line 146, in trampoline
value = next(value.generator)
File "/HOME/.local/lib/python3.2/site-packages/LEPL-5.0.0-py3.2.egg/lepl/lexer/lexer.py", line 133, in _match
(max, clean_stream) = s_new_max(in_stream)
File "/HOME/.local/lib/python3.2/site-packages/LEPL-5.0.0-py3.2.egg/lepl/stream/core.py", line 283, in <lambda>
s_new_max = lambda stream: stream[1].new_max(stream[0])
File "/HOME/.local/lib/python3.2/site-packages/LEPL-5.0.0-py3.2.egg/lepl/stream/core.py", line 223, in new_max
raise NotImplementedError
----------
Original issue reported on code.google.com by [email protected]
on 26 Dec 2011 at 6:15
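Until the file-based stream implements new_max, a possible workaround is to read the whole file into memory and use the string parser instead of parse_file(). The helper below is a sketch written for this report (parse_file_workaround is not a LEPL name); it only assumes the parser object has the parse() method used elsewhere in these examples.

```python
def parse_file_workaround(parser, path):
    """Read the file eagerly and hand the contents to parse(),
    sidestepping the file-stream code path that raises
    NotImplementedError in new_max."""
    with open(path) as f:
        text = f.read()
    return parser.parse(text)
```

This loses lazy streaming, so it is only suitable for files that fit comfortably in memory.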
Join() not in __all__ list of lepl/__init__.py
Original issue reported on code.google.com by [email protected]
on 30 Nov 2009 at 12:41
I'm using this issue to collect together various possible changes related
to regexps and line-aware parsing.
I don't promise to do everything, of course, but at least I won't miss
things by accident if they are listed here.
Original issue reported on code.google.com by [email protected]
on 23 Nov 2009 at 11:46
An empty lepl.List (or a subclass) cannot be printed (or converted to a
string):
>>> l = lepl.List()
>>> print l
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/exports/home/wagnerflo/.local/lib/python2.6/site-packages/LEPL-4.3.3-py2.6.egg/lepl/support/list.py", line 55, in __str__
return sexpr_to_tree(self)
File "/exports/home/wagnerflo/.local/lib/python2.6/site-packages/LEPL-4.3.3-py2.6.egg/lepl/support/list.py", line 165, in sexpr_to_tree
return '\n'.join(fold(list_)('', ''))
File "/exports/home/wagnerflo/.local/lib/python2.6/site-packages/LEPL-4.3.3-py2.6.egg/lepl/support/list.py", line 163, in <lambda>
return lambda first, rest: join(list(fun(first, rest)))
File "/exports/home/wagnerflo/.local/lib/python2.6/site-packages/LEPL-4.3.3-py2.6.egg/lepl/support/list.py", line 162, in fun
yield force[-1](rest + ' `- ', rest + ' ')
IndexError: list index out of range
A patch is attached which seems to fix the issue.
Original issue reported on code.google.com by [email protected]
on 28 Nov 2010 at 2:29
Attachments:
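For reference, the traceback shows that the tree renderer indexes the last child of a node without first checking that any children exist. A minimal, hypothetical reimplementation of the renderer (my own sketch, not LEPL's sexpr_to_tree) shows where the empty-list guard belongs:

```python
def sexpr_to_tree(node):
    """Render a nested list as an ASCII tree.

    The early return for a childless node is the point of the fix:
    the loop body (which indexes the last child) never runs for an
    empty list, so printing it cannot raise IndexError."""
    lines = [type(node).__name__ if isinstance(node, list) else repr(node)]
    if isinstance(node, list):
        for i, child in enumerate(node):
            last = (i == len(node) - 1)
            sub = sexpr_to_tree(child).split('\n')
            # draw the branch for this child, continuing the rails below it
            lines.append((' `- ' if last else ' +- ') + sub[0])
            lines.extend(('    ' if last else ' |  ') + s for s in sub[1:])
    return '\n'.join(lines)
```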
Probably need a "support" chapter (or a section under Download and
Installation?).
Original issue reported on code.google.com by [email protected]
on 31 Jan 2009 at 10:47
from unittest import TestCase
from lepl import *
class LeftRecursiveTest(TestCase):
def test_left(self):
item = Delayed()
item += item[1:] | Any() | ~Lookahead('\\')
expr = item[:] & Drop(Eos())
expr.parse_string('a(bc)*d')
Running the code above enters an apparently infinite loop. This is
probably related to Lookup() and the automatic detection of left recursion.
Original issue reported on code.google.com by [email protected]
on 15 Mar 2009 at 5:49
See http://groups.google.com/group/lepl/msg/23734c4bbdaf45f2 and related
thread.
The problem occurs when a graph is cloned starting from a point which is
not the root of the tree (if Delayed nodes are broken to remove cycles).
Original issue reported on code.google.com by [email protected]
on 4 Sep 2009 at 12:50
This is a broad, long-term enhancement - error handling and trace output
are still pretty much as they were in the first few releases and could
really do with some love and attention.
Original issue reported on code.google.com by [email protected]
on 29 Apr 2009 at 1:40
lepl.String() "filters out" empty strings. Example:
>>> import lepl
>>> lepl.__version__
'5.0.1'
>>> lepl.String().parse('""')
[]
Here's what I was expecting:
>>> lepl.String().parse('""')
['']
I'm getting around this by doing the following:
>>> (lepl.String() > (lambda args:args and args[0] or '')).parse('""')
['']
But really I think this should be the default behavior; or, if there is a
good reason not to, the docs should be updated to reflect it. It took me a
while to figure out what was going wrong because this “feature” (if it is
not in fact a bug) is very counter-intuitive.
Original issue reported on code.google.com by [email protected]
on 16 Mar 2012 at 6:06
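The lambda workaround above can be written as a named helper, which also avoids the classic `and/or` idiom (harmless here, since an empty matched string should map to '' anyway). The name first_or_empty is mine, not LEPL's:

```python
def first_or_empty(results):
    """Return the single matched string, or '' when the match
    produced no results (as happens for the input '""')."""
    return results[0] if results else ''
```

It would be attached exactly like the lambda, i.e. `lepl.String() > first_or_empty`.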
LEPL has ad-hoc, incomplete, self-implemented support for graphs.
It would probably be better to use http://code.google.com/p/python-graph/
or similar. One advantage is pretty pictures :)
Original issue reported on code.google.com by [email protected]
on 4 May 2009 at 3:38
The example at
http://www.acooke.org/lepl/offside.html#example
has line
>>> program.config.blocks(block_policy=2)
I believe it should be
>>> program.config.lines(block_policy=2)
or even
>>> program.config.blocks(block_policy=explicit())
?
Original issue reported on code.google.com by [email protected]
on 12 Dec 2011 at 11:41
I just wrote an ANTLR grammar for LEPL (adapted from a grammar I previously
wrote for pyparsing). I wrote it for fun and to learn LEPL, but I suppose
it could be useful for these reasons:
it makes LEPL capable of parsing an EBNF grammar, which could be useful to create a LEPL grammar repository / serialization format and to provide a well-known syntax
LEPL could parse the grammars in ANTLR repositories (Java, C, JavaScript, SQL, ...)
an ANTLR grammar is top-down, so LEPL could be both bottom-up and top-down at the same time, without the need to write Delayed()
It could be included in some method like "parse_antlr" or "parse_ebnf"... anyway,
it is my way to contribute :-)
Original issue reported on code.google.com by [email protected]
on 18 Mar 2011 at 9:42
Attachments:
I think it should be possible to port to 2.5 before doing the first
non-beta release. The ABC will need to be made optional, and "with"
statements must be imported from __future__, but apart from that I don't
expect any major issues.
Original issue reported on code.google.com by [email protected]
on 31 Jan 2009 at 10:42
What steps will reproduce the problem?
1. Node('abc') == Node('abc')
True
2. Node('abc') != Node('abc')
True
3. Node(Node('abc')) == Node(Node('abc'))
False
What is the expected output? What do you see instead?
Expected: True, False, True
Seen: True, True, False
What version of the product are you using? On what operating system?
3.3.3
Mac OS 10.5.8
Python 2.6.4
Please provide any additional information below.
Original issue reported on code.google.com by [email protected]
on 12 Dec 2009 at 3:04
Please run the attached program. It causes LEPL to go into an infinite
loop. This is due to the line containing "vwx yz" - indent it by 4 more
spaces and it will work.
But if the grammar is incorrectly defined, then I would expect an error.
Original issue reported on code.google.com by [email protected]
on 12 Dec 2011 at 7:56
Attachments:
See Kelbt
Original issue reported on code.google.com by [email protected]
on 2 Oct 2009 at 10:29
What steps will reproduce the problem?
>>> print String().parse('"line1\\\nline2"')
['line1\\\nline2']
What is the expected output? What do you see instead?
['line1line2']
What version of the product are you using? On what operating system?
Ubuntu 9.04/python2.6.2/lepl3.3
Thanks!
Original issue reported on code.google.com by [email protected]
on 15 Oct 2009 at 6:53
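Until the matcher treats backslash-newline as a line continuation itself, the pairs can be spliced out of the result in post-processing. This is a sketch; splice_continuations is a name invented here, and it would be attached to the matcher with the usual `>` result transformation:

```python
import re

def splice_continuations(s):
    # drop backslash-newline pairs, i.e. treat them as line
    # continuations rather than literal characters
    return re.sub(r'\\\n', '', s)
```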
This is pretty obscure, but the "*" operator (which lets you apply the
current results to a function that takes a matching number of arguments) is
broken.
It will be fixed in 3.0.
Original issue reported on code.google.com by [email protected]
on 29 May 2009 at 9:57
Just wrote this in reply to someone asking about the offside rule, thought
it might as well go into the issue tracker:
yes, you are right, this is missing :o)
i have just added the lexer and i was going to add support for the offside
rule in the next release. the way i was thinking of doing it is by
allowing the lexer to take an arbitrary generator function whose argument
is the token stream. the result of this function would be used as the
actual stream supplied to the tokens.
given that, i would then write such a function that contains the internal
state of the current indentation level and converts spaces and newlines
into tokens that indicate whether the indentation level has changed or
not.
so the function would pass through most tokens, but consume "leading
space" tokens and "newline" tokens, and add "indent" and "deindent"
tokens.
i don't know if that makes complete sense - i have not looked in detail at
the problem, but broadly that was my "plan of attack".
i am not sure when the next release will be, unfortunately, as i am very
busy with work (deadline) at the moment, and my motherboard is dying
(which means free hours will be spent rebuilding the computer...). i
would guess it would be some time in june.
if you want to implement something yourself, what i suggest above is the
best idea i have had so far. if you do implement something yourself, i
would be interested to know what you did (you can even contribute code if
you want! :o)
Original issue reported on code.google.com by [email protected]
on 7 May 2009 at 9:48
The detection of loops (and so the correct use of LMemo/RMemo) is broken in
3.2. This is because the new cloning no longer gives special meaning to
Delayed instances, but the loop detection code still relied on the old
behaviour.
The workaround for now is to specify the following configuration instead of
the default:
Configuration(
rewriters=[flatten, compose_transforms, lexer_rewriter(),
optimize_or(True), memoize(LMemo)],
monitors=[TraceResults(False)])
However, I would only suggest doing this if you have left-recursive grammar
and/or see the error:
TypeError: 'NoneType' object is not iterable
Andrew
Original issue reported on code.google.com by [email protected]
on 6 Sep 2009 at 10:43
A regexp that uses an inverted range (square brackets starting with a
caret) will give misleading results. For example [^9-0] may match all digits.
The problem is that the parser for a regexp does not order the character
intervals before sending them to alphabet.invert. This can be fixed by
changing the definition of "invert" in make_str_parser (lepl.regexp.str) to
invert = lambda x: alphabet.invert(Character(x, alphabet))
since Character() does sort the intervals.
The next release of LEPL (2.4) will include this fix.
Andrew
Original issue reported on code.google.com by [email protected]
on 19 Apr 2009 at 2:32
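The role of the sorting can be seen in a standalone sketch of interval inversion (hypothetical code, not LEPL's): the complement is built by walking the intervals in order, so unsorted or reversed input silently produces the wrong set unless it is normalised first, which is what routing ranges through Character() achieves.

```python
def invert_intervals(intervals, lo=0, hi=255):
    """Complement of inclusive (a, b) character intervals over [lo, hi].

    The walk below is only correct for sorted, normalised intervals,
    so each interval is put in (low, high) order and the list sorted,
    mirroring what Character() does before alphabet.invert is called."""
    ordered = sorted(tuple(sorted(i)) for i in intervals)
    out, start = [], lo
    for a, b in ordered:
        if start < a:
            out.append((start, a - 1))   # gap before this interval
        start = max(start, b + 1)
    if start <= hi:
        out.append((start, hi))          # tail after the last interval
    return out
```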
The code below
-----
from lepl import *
v = Token('[a-z]+') & Token(' +') & String()
v.parse('aaa "aaa"')
-----
gives the error
------
lepl.lexer.support.LexerError: The grammar contains a mix of Tokens and
non-Token matchers at the top level. If Tokens are used then non-token
matchers that consume input must only appear "inside" Tokens. The non-Token
matchers include: Any(None); Literal('"'); Lookahead(Literal, True);
Literal('"'); Literal('"'); Literal('\\').
------
Trying to tokenize the string fails as well
-------
from lepl import *
v = Token('[a-z]+') & Token(' +') & Token(String())
v.parse('aaa "aaa"')
-------
as the code above gives
-------
lepl.lexer.support.LexerError: A Token was specified with a matcher, but the
matcher could not be converted to a regular expression: And(NfaRegexp,
Transform, NfaRegexp)
--------
Original issue reported on code.google.com by [email protected]
on 26 Dec 2011 at 6:10
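Since a Token is built from a regular expression, one possible workaround is to express the quoted string directly as a regexp rather than wrapping the String() matcher. The pattern below is my own (not taken from LEPL) and handles backslash escapes; the assumption is that it could be passed straight to Token():

```python
import re

# a double-quoted string with backslash escapes; in LEPL this could be
# used as Token(r'"(?:[^"\\]|\\.)*"') instead of Token(String())
STRING_RE = r'"(?:[^"\\]|\\.)*"'

def find_strings(text):
    # demonstrate the pattern on plain text with the stdlib re module
    return re.findall(STRING_RE, text)
```

Quote removal and escape processing would then happen in a result transformation rather than inside the token.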