Automatically exported from code.google.com/p/hypergraphdb
It's useful to declare a HGLink constructor like this:
A(HGHandle x, HGHandle y)
instead of
A(HGHandle...targets)
because the first form makes it clearer (e.g. in Javadocs) what the
semantics of each argument position are. The Java->HGDB type mappers should
recognize this.
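For illustration, the two constructor styles could be sketched like this (using a stand-in `HGHandle` interface and a hypothetical `OwnsLink` link, not actual HGDB code):

```java
// Stand-in for the real org.hypergraphdb.HGHandle, for illustration only.
interface HGHandle {}

class OwnsLink {
    private final HGHandle[] targets;

    // Preferred form: named positional parameters document the semantics
    // of each target position directly in the signature and Javadocs.
    // Position 0 is the owner, position 1 is the owned entity.
    public OwnsLink(HGHandle owner, HGHandle owned) {
        this.targets = new HGHandle[] { owner, owned };
    }

    // Generic var-args form: works, but the meaning of each position
    // is invisible to readers of the API.
    public OwnsLink(HGHandle... targets) {
        this.targets = targets;
    }

    public int getArity() { return targets.length; }
    public HGHandle getTargetAt(int i) { return targets[i]; }
}
```

A call with exactly two handles resolves to the fixed-arity constructor, which is what a type mapper could also recognize.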
Original issue reported on code.google.com by [email protected]
on 10 Apr 2010 at 4:10
It would be nice to somewhat automate the "add a new atom if it doesn't
already exist" pattern in a HyperGraph. A good name for this method
is 'assert'. The method would use a set of heuristics to quickly determine
whether the atom is already there:
- search based on
1) the atom's type
2) the atom's target set
3) the atom's value
4) any indices for that type that could be used
- add the atom if it could not be found, or return the existing one. Here,
if we return the HGHandle, there's no way for the caller to know whether a
new atom was added or not and there's no way for them to know whether the
Java object passed to the method refers to the atom. Therefore, it might
be best to return the Java object of the atom instead of the handle which
can be obtained in a subsequent call to getHandle.
But even that doesn't seem so elegant. It very much depends on the
caller whether they need the handle or the Java object, and in either case
there's a good chance they'd have to make a separate API call.
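The intended get-or-add semantics can be sketched over a toy in-memory store; `MiniGraph` and its value-based lookup are illustrative stand-ins, not the HyperGraphDB API:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a graph with value-based atom lookup.
class MiniGraph {
    private final Map<Object, Object> byValue = new HashMap<>();

    // Returns the already-stored Java object when an equivalent atom
    // exists, otherwise adds the argument and returns it. Returning the
    // Java object (rather than a handle) lets the caller keep working
    // with the canonical instance, as discussed above; the handle could
    // then be obtained via a subsequent getHandle call.
    @SuppressWarnings("unchecked")
    public <T> T assertAtom(T atom) {
        Object existing = byValue.get(atom);
        if (existing != null) return (T) existing;
        byValue.put(atom, atom);
        return atom;
    }
}
```

In the real system, the lookup would of course go through the type, target set, value, and index heuristics enumerated above rather than a single hash map.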
Original issue reported on code.google.com by [email protected]
on 17 Apr 2008 at 8:44
Hypernodes are basically nested graphs. Support of such a thing is
conceptually not so hard to imagine: it must be possible for an atom to
represent a nested graph that establishes a context for graph operations
(both mutation & querying). Two possibilities that I see:
1) An entire separate storage instance (i.e. BDB instance) for each such
separate nested graph. And means to communicate b/w those separate
instances at the HyperGraph-level.
2) Special purpose indexing that comes into play whenever we're within a
nested context. Here, every hypernode has an associated index holding all
atoms in it. Mutation and querying are tied to this index by joining it
globally into every query (and then letting the query engine optimize the
actual processing).
Clearly, the only advantage of (1) is speedier queries within a hypernode,
because there's no extra join. Otherwise, (2) would allow the same atom to
participate in several different nested graphs. Both will need extra care
to manage parent-child relationships b/w the hypernodes. And (2) has the
advantage of speedier global queries that don't care about the hypernode
structure.
So with (2), a hypernode structure can be overlaid relatively easily, but
with the global penalty of this extra join. It is desirable to have this
implemented natively because a representation with links will create too
many atoms (a link b/w the hypernode atom and each of its members). Still,
it would be nice to make this pluggable, say, the same way transactions
can be plugged in and disabled...
Original issue reported on code.google.com by [email protected]
on 20 Jan 2010 at 6:47
Currently atom incidence (a.k.a. incoming) sets are represented as arrays
(HGHandle[]). This is not very efficient for lookups and prevents the
loaded incidence set of an atom from being used effectively in a
query. And incidence sets are used *a lot* in queries.
So, while there's still not too much code written against it, we should
change the API so that HyperGraph.getIncidenceSet(atom) returns a
Set<HGHandle>, or actually a SortedSet<HGHandle>, and have the cache do the
same. The implementation could use tries if they don't take too much
space...
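A minimal sketch of the idea, with UUID standing in for HGHandle: a sorted set gives O(log n) membership tests, versus the O(n) scan needed over an array, and can be consumed directly by query predicates.

```java
import java.util.SortedSet;
import java.util.TreeSet;
import java.util.UUID;

// Illustrative incidence set backed by a SortedSet instead of an array.
// UUID is a stand-in for HGHandle here.
class IncidenceSetSketch {
    private final SortedSet<UUID> members = new TreeSet<>();

    public void add(UUID handle) { members.add(handle); }

    // O(log n) membership test, usable directly during query evaluation.
    public boolean contains(UUID handle) { return members.contains(handle); }

    public int size() { return members.size(); }
}
```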
Original issue reported on code.google.com by [email protected]
on 8 Mar 2008 at 4:59
self-explanatory
Original issue reported on code.google.com by [email protected]
on 19 May 2010 at 8:26
As per this discussion:
http://groups.google.com/group/hypergraphdb/browse_thread/thread/9019d65e269c8650
Currently, generators maintain state to return the current link being
traversed. That simplifies iterating over them, but prevents them from being
reused, for example in a nested loop.
The 'getCurrentLink' method could be removed and the generate method could
return a result set over a Pair<LinkHandle, AtomHandle>, like traversals do.
This is clearly a better approach, and the change is still possible since
generators are not used that much.
Original issue reported on code.google.com by [email protected]
on 10 Apr 2010 at 4:17
Other OO DBs, e.g. Hibernate, do it...so should we.
Original issue reported on code.google.com by [email protected]
on 4 Apr 2010 at 3:29
The current implementation of HGStore uses a single Berkeley DB database
called "data_db" to store both links and raw data. Links are stored as
UUID arrays made up of n consecutive 16-byte blocks stored as a single
value keyed by the UUID constituting the link. Raw data is also keyed by
UUID, but the data is just a byte[] to be interpreted by HG type
implementations.
This setup makes it difficult to implement smart remote HGDB access. Each
HG type defines its own layout in low-level storage. So to implement
remote access that works at the atom level, each type implementation would
have to participate in the implementation. We don't want that. We'd like
remote access to be implemented entirely at the storage level, so that
type implementations wouldn't have to worry about whether they are
accessing local or remote storage.
Therefore, we need to separate the data DB into two Berkeley databases:
one for links and one for raw data. This way, we could architect the
distributed version of HGDB by having a HGRemoteStore implementation that
is bound to some remote server and is capable of retrieving a low-level
graph of UUIDs and raw data in bulk. An HGRemoteStore implementation
would rely only on the universal atom layout (made up of [TypeHandle,
ValueHandle, TargetHandle1, ..., TargetHandleN]).
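The universal layout described above can be sketched as a round-trippable encoding of 16-byte UUID blocks; the big-endian ByteBuffer encoding here is illustrative, not necessarily the byte order the storage layer actually uses.

```java
import java.nio.ByteBuffer;
import java.util.UUID;

// Sketch: an atom as a flat sequence of 16-byte UUID blocks
// [TypeHandle, ValueHandle, Target1, ..., TargetN], stored as one value.
class AtomLayout {
    static byte[] encode(UUID... handles) {
        ByteBuffer buf = ByteBuffer.allocate(16 * handles.length);
        for (UUID h : handles) {
            buf.putLong(h.getMostSignificantBits());
            buf.putLong(h.getLeastSignificantBits());
        }
        return buf.array();
    }

    static UUID[] decode(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        UUID[] handles = new UUID[data.length / 16];
        for (int i = 0; i < handles.length; i++)
            handles[i] = new UUID(buf.getLong(), buf.getLong());
        return handles;
    }
}
```

A remote store that only understands this layout can ship whole subgraphs in bulk without knowing anything about individual type implementations.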
Original issue reported on code.google.com by [email protected]
on 2 Sep 2007 at 6:23
How do we identify peers logically? Perhaps we need a super-peer to manage
identity? In any case, it will be useful.
Original issue reported on code.google.com by [email protected]
on 23 Jun 2008 at 1:29
BerkeleyDB has a limit on the number of locks held at any given time. This
creates a situation where particularly large transactions fail. In general,
it is up to an application to break a transaction into smaller pieces, but
there are cases where the HGDB API itself can lead to failure. For example,
when removing a type atom that has many instances, the transaction fails
with an out-of-memory error because removing each separate instance takes a
lock on a (potentially) different page. Given that there's no locality of
reference for atoms of the same type (unlike in a relational database,
where all records of a given type are in one table = one file), this seems
inevitable. We can try to solve the problem by changing value management so
that each type gets its own DB. But (1) that will open a new can of worms
and (2) it won't solve the long transaction problem in general.
Perhaps such long transactions should be handled in a special way, always
in a single thread and with a lock on the whole DB environment.
Or have some automatic means to break them down into smaller transactions.
In the case of type removal, perhaps, the type can be marked as "in process
of removal" which would block any usage of it and a background process can
gradually delete all instances, each in a separately committed transaction.
How can this solution be generalized and made into an API?
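The background, batched removal idea can be sketched as follows; the commit counter stands in for a real transaction boundary, and the batch size is an arbitrary illustrative knob:

```java
import java.util.List;

// Sketch: delete a large set of instances in fixed-size batches, each
// batch committed as its own small transaction, so no single transaction
// ever holds more than chunkSize pages' worth of locks.
class ChunkedRemoval {
    static int removeInChunks(List<String> instances, int chunkSize) {
        int commits = 0;
        while (!instances.isEmpty()) {
            // One "transaction": remove up to chunkSize instances.
            int n = Math.min(chunkSize, instances.size());
            instances.subList(0, n).clear();
            commits++; // commit the small transaction here
        }
        return commits;
    }
}
```

While the batches run, the type would be marked "in process of removal" so no new usage of it can sneak in between commits.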
Original issue reported on code.google.com by [email protected]
on 11 Jan 2008 at 8:36
We have two cases: static inner classes and non-static inner classes.
Static inner classes are easy as this is just a question of scope
(namespacing).
Non-static inner classes are trickier since we need to keep a pointer to
the enclosing object. Also, this needs to be represented conceptually in
the proper way. From a type-theoretic perspective, can we just say that a
non-static inner instance is a "record" like all other Java instances, with
an additional slot for the enclosing instance? That's how it's going to be
stored, and that's how bindings for languages with no such construct will
represent it as a run-time instance.
Original issue reported on code.google.com by [email protected]
on 16 Sep 2007 at 4:23
What steps will reproduce the problem?
Please check the short example as attachment
What is the expected output? What do you see instead?
Running the example (as attachment) I get the following result:
*************************************************
* INITIALISATION *
*************************************************
- Graph initialised (C:\hgdb)
*************************************************
* [EX1] Get all instances of class Folder *
*************************************************
- Folder Folder 02
- Folder Folder 01
*************************************************
* [EX2] Get instance(s) of class Folder *
* and having name 'Folder 01' *
*************************************************
- Folder Folder 01
*************************************************
* [EX3] Get instance(s) linked to 'Folder 01' *
*************************************************
- Folder Folder 02
*************************************************
* [EX4] Get instance(s) of class Folder *
* connected to 'Folder 01' *
*************************************************
*************************************************
* END *
*************************************************
EX1, EX2, EX3 produce the expected result.
In EX4, I expect to retrieve "Folder 02" like in EX3.
What version of the product are you using? On what operating system?
I use HEAD revision 1152
Please provide any additional information below.
Original issue reported on code.google.com by [email protected]
on 4 Sep 2010 at 11:41
Attachments:
This is the classical limit clause from SQL. Two things here:
1) Ability to position a result set at the Nth entry
2) Ability to retrieve only a partial number of elements.
(2) is very easy.
(1) is much harder, because in the general case everything up to the Nth
entry will need to be calculated for a cursor to be positioned there. This
is the same problem as with scanning atoms of a specific type or deleting
all atoms of a specific type: it is going to be inherently
underperformant compared to RDBMSs, where tables have fixed row length and
it's very easy to calculate the position of the Nth row. But there's
another option: positioning to the Nth entry is generally done after the
first N entries have been examined, when basically the whole result set is
being paged. So it would be possible to cache the (N-1)th element for
random-access result sets and then just "goto" it when the Nth is
requested. This is going to be part of a more general "query caching".
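The general-case cost of (1) and the ease of (2) can be shown over a plain Iterator, which is essentially what a forward-only result set is:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of offset/limit over a forward-only cursor: the first 'offset'
// entries must be produced and discarded before the page can start,
// which is exactly the positioning cost described above.
class LimitOffset {
    static <T> List<T> page(Iterator<T> it, int offset, int limit) {
        for (int i = 0; i < offset && it.hasNext(); i++)
            it.next(); // pay the positioning cost entry by entry
        List<T> out = new ArrayList<>();
        for (int i = 0; i < limit && it.hasNext(); i++)
            out.add(it.next());
        return out;
    }
}
```

With a random-access result set and a cached position for the (N-1)th element, the first loop could be replaced by a single "goto".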
Original issue reported on code.google.com by [email protected]
on 19 Jan 2010 at 6:51
This is something impossible in an RDBMS, but it should be quite doable in
HyperGraphDB. The idea would be that an atom is indexed based on several
relationships with other atoms. This is the same thing as indexing mini
graph patterns. But the current model actually makes it hard to do because
indexing
is triggered upon atom addition. Because an indexer doesn't have control over
the order in which the atom and its relationships are added (or removed), it
will need to do a mini-traversal during each addition. This may be ok - we're
trading off write speed vs. query speed as usual. But...is it possible to
come up with a better model?
Also, what would the meta-data of such an indexer look like so that the query
engine can make good use of it?
Original issue reported on code.google.com by [email protected]
on 14 Mar 2010 at 1:53
BerkeleyDB supports it and probably others do too. This could be
implemented as some optional attributes on the indexers. And the
implementation could fall back gracefully to whatever is supported in the
underlying storage.
Original issue reported on code.google.com by [email protected]
on 31 Mar 2010 at 4:58
The current one is a bit ad hoc and hard for users to customize. Perhaps
refactor it and use the following:
http://code.google.com/p/google-gson/
Original issue reported on code.google.com by [email protected]
on 21 Oct 2010 at 6:36
When a message holding a large portion of the graph is serialized, this may
break a transaction due to the large amount of locks that need to be held.
Such big messages are rare, but they can break the system. We need the
ability to split them into smaller chunks, which should be implemented as an
API. The application will just need to make sure that either the graph being
transmitted doesn't change in between messages or that it is ok if it does.
Original issue reported on code.google.com by [email protected]
on 23 Jan 2010 at 4:01
Integration with Lucene or Xapian would be nice...first, more analysis is
needed on the best way to do it. Perhaps (depending on licensing issues)
the best thing is to just take an already implemented algorithm in the form
of code somewhere and integrate it into the codebase. This will save
deployment hassles, and the integration will probably be easier (except for
the transactional part).
Original issue reported on code.google.com by [email protected]
on 9 Apr 2010 at 4:17
A test for the goBeforeFirst and goAfterLast methods of each
HGRandomAccessResult implementation needs to be written.
Original issue reported on code.google.com by [email protected]
on 29 Apr 2010 at 7:18
Only atoms per se are indexed by their type, but not other values that may
be part of some record (or another complex structure). So when a type gets
deleted, those other values remain with dangling type handle references and
the atoms that contain them cannot be loaded.
Original issue reported on code.google.com by [email protected]
on 21 Jun 2009 at 10:07
The Message class is currently being serialized and deserialized together
with everything else. We need to implement messages as JSON formatted,
human readable, programming language and platform/JVM agnostic text.
JSON is powerful enough to represent queries etc.
While the wire representation is JSON, we need to have a general data
structure for representing messages that is independent of JSON while
still following the same formal model of nested structures with named
slots, primitive values and lists. The same data structure could be used
for s-expressions if we decide to switch the format for whatever reason
(e.g. to interact with other "agent-oriented" frameworks).
Initially, the top-level structure needs to be specified: what are the
available attributes, what is their precise meaning, and which ones are
required versus optional.
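The JSON-independent model of nested named slots, primitives and lists maps naturally onto plain Java collections; the slot names below ("performative", "replyTo", etc.) are illustrative guesses, not the actual message attributes, which this issue says still need to be specified:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: a message as a nested structure of named slots, primitive
// values and lists, independent of whether the wire format is JSON
// or s-expressions.
class MessageSketch {
    static Map<String, Object> makeMessage() {
        Map<String, Object> msg = new LinkedHashMap<>();
        msg.put("performative", "query");   // hypothetical attribute
        msg.put("replyTo", "peer-1");       // hypothetical attribute
        Map<String, Object> content = new LinkedHashMap<>();
        content.put("type", "Folder");
        content.put("targets", Arrays.asList("h1", "h2"));
        msg.put("content", content);
        return msg;
    }
}
```

Any structure of this shape can be rendered as JSON or as an s-expression by a purely mechanical serializer.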
Original issue reported on code.google.com by [email protected]
on 22 Jun 2008 at 5:23
What steps will reproduce the problem?
1. Provide dependency in Maven to a library (jar)
2. Create instance of the HyperGraph: graph = new HyperGraph(databaseLocation);
3. Add class instance from this *.jar: graph.add(new AddClass(null, null,
null));
What is the expected output? What do you see instead?
We expected to add an instance of a class from the *.jar into the newly created HyperGraph database. Instead, the following error appears:
org.hypergraphdb.HGException: Unable to create HyperGraph type for class
menta.model.howto.AddClass
at org.hypergraphdb.HyperGraph.add(HyperGraph.java:647)
at org.hypergraphdb.HyperGraph.add(HyperGraph.java:611)
at hypergrpahDB.JavaTest.testJavaClass(JavaTest.java:15)
at samples.HGDBTest.HGDBCreateSample(HGDBTest.scala:20)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:103)
at org.apache.maven.surefire.Surefire.run(Surefire.java:169)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:350)
at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1021)
What version of the product are you using? On what operating system?
HypergraphDB 1.1 on Windows XP. I think it's not important.
We use it via Maven.
Please provide any additional information below.
We tried to use Scala instances and Java instances. There's one strange
thing: when we extend, in the same source file, the class that can't be
added to the HGDB, it seems to go well (please see below), but when we
tried to do the same with other classes it wasn't successful.
You can find the sources here:
http://code.google.com/p/menta/source/browse/#hg%2Ftest%2FHypergraphDBTest
This test class creates the DB:
http://code.google.com/p/menta/source/browse/test/HypergraphDBTest/src/test/scala/samples/HGDBTest.scala
Here we try to add the class instance (Java, unsuccessful):
http://code.google.com/p/menta/source/browse/test/HypergraphDBTest/src/main/java/hypergrpahDB/JavaTest.java
Here we try to add the class instance (Scala, successful, see line 90):
http://code.google.com/p/menta/source/browse/test/HypergraphDBTest/src/main/scala/hypergrpahDB/HGDBHelper.scala
PS: if you need a more simplified example, I can provide it. Feel free to
write me.
Original issue reported on code.google.com by [email protected]
on 27 Jan 2011 at 11:33
This will probably work only for homogeneous result sets: same type, or
same super-type where that super-type is comparable or sorting is done by a
comparable property of the super-type. Only when there's already an index
on the value being sorted can we avoid loading everything into memory and
sorting there. For very large data sets, we need disk-based mergesort (it's
the most efficient, last I checked, and not hard to implement at all).
Original issue reported on code.google.com by [email protected]
on 19 Jan 2010 at 6:28
The transaction there doesn't loop and fails in case of deadlock detection.
This should be fixed carefully since there are some state variables to be
rolled back.
Original issue reported on code.google.com by [email protected]
on 15 Mar 2010 at 3:15
An idea to explore for the HyperGraphDB as an API is the implementation of
atoms that are in RAM only and that are never written to the database.
This would affect all parts of the implementation and it is doable, at a
certain cost. Such atoms:
1) Will only reside in the HGDB cache as frozen atoms.
2) Will have proper type handles associated with them, but no real value
handles pointing to the storage.
3) Will be indexed as all other atoms through a careful implementation of
in-memory indices that are automatically intersected with the DB indices.
4) Will be able to link to persistent atoms, but not be pointed to by
them. One difficulty here is the management of the incidence sets of
persistent atoms: they could contain RAM-only atoms. So any atom that is
pointed to by a RAM-only atom must have its incidence set frozen in the
cache and used exclusively in queries.
This is all very doable. The main overhead is in the management of indices
and the intersection b/w BerkeleyDB indices with HGDB in-memory ones. From
the point of view of querying this is an implementation detail.
A huge downside from an architectural perspective is the possibility of
application code referring to RAM atoms from persistent atoms. Nothing
prevents normal application logic from getting a handle to a temp atom and
persisting it without knowing that it's not going to be available the next
time. This might be ok. After all, nothing prevents an application from
keeping a reference to an atom that has been removed from permanent
storage. As long as HGDB itself avoids such problems, there's no reason to
cut functionality because it might be misused.
Original issue reported on code.google.com by [email protected]
on 19 Mar 2008 at 3:45
HyperGraphDB already supports transactions, thanks to the BerkeleyDB
facilities. However, none of the caching data structures participates in a
transaction. An attempt was made only during the creation of new HGDB
types in HGTypeSystem, to backtrack insertions in the classToAtom map when
a transaction fails. However, similar subtle problems may arise due to the
lack of transactionality in other caches, most notably the main
HGAtomCache.
We need to develop a general framework to hook in-memory data structures
to a database transaction. This could be useful to applications using HGDB
as well. The goal is, as always!, simplicity and minimal performance
impact.
Original issue reported on code.google.com by [email protected]
on 5 Aug 2007 at 10:09
What steps will reproduce the problem?
1. Do a clean checkout of the current trunk.
2. Run ant
3. Observe compile failure
What is the expected output? What do you see instead?
I'd expect it to compile. :-) Instead I see
[javac] Compiling 265 source files to
/home/david/hypergraphdb-read-only/core/build
[javac] /home/david/hypergraphdb-read-only/core/src/java/org/hypergraphdb/HyperGraph.java:16: cannot find symbol
[javac] symbol : class SimpleCache
[javac] location: package org.hypergraphdb.cache
[javac] import org.hypergraphdb.cache.SimpleCache;
[javac] ^
[javac] /home/david/hypergraphdb-read-only/core/src/java/org/hypergraphdb/HyperGraph.java:240: cannot find symbol
[javac] symbol : class SimpleCache
[javac] location: class org.hypergraphdb.HyperGraph
[javac] SimpleCache<HGPersistentHandle, IncidenceSet> incidenceCache = new
[javac] ^
[javac] /home/david/hypergraphdb-read-only/core/src/java/org/hypergraphdb/HyperGraph.java:241: cannot find symbol
[javac] symbol : class SimpleCache
[javac] location: class org.hypergraphdb.HyperGraph
[javac] SimpleCache<HGPersistentHandle, IncidenceSet>(); // (0.9f, 0.3f);
[javac] ^
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 3 errors
What version of the product are you using? On what operating system?
Current trunk on linux.
Original issue reported on code.google.com by [email protected]
on 27 Aug 2008 at 10:53
Design and develop a protocol that in effect implements an exact
replication of a set of atoms as defined by a querying predicate (a
HGAtomPredicate). The use case is stated simply: given a group of
peers, we want to make sure that all atoms satisfying a given condition are
replicated everywhere and available locally. Here, we'd like to
declaratively state the condition somewhere and never worry about it
again. HyperGraph events can be used to track add/remove of atoms in the
local DB.
Original issue reported on code.google.com by [email protected]
on 23 Jun 2008 at 1:26
Right now, incidence sets are maintained as HGHandle [] in the cache.
Instead, we should have them as fast-lookup handle sets. Some may become
quite big and querying + traversal algorithms could benefit from a
speedier, in-memory representation.
Original issue reported on code.google.com by [email protected]
on 14 Dec 2007 at 12:48
The 'tasks' map in JXTAPeerInterface is ever growing. Seems like the
logical thing to do is to remove a task from there as soon as it reaches an
"end" state.
Original issue reported on code.google.com by [email protected]
on 10 Feb 2009 at 6:51
This would be a very fundamental change, to be done only if there's a
serious need for it. Very early on, we made the decision that the target
sets of atoms are to be stored as a "blob" sequence of handles. This
implicitly makes all links "oriented" if anyone cares about the order. The
assumption there is that most of the time one does care about the order
since links represent relationships most of the time and it's more
efficient to interpret the semantics of a relationship by using the
positions of its arguments. However, mathematically edges in a hypergraph
are really sets that can get very large. With the edges pointing to edges
generalization, we have a natural symmetry b/w incidence sets and target
sets that is broken in the current architecture. For example, one cannot
easily intersect two large target sets. And that's fine. But I have a vague
intuition that some interesting graph algorithms might come out if we
restore this symmetry in the implementation (think of the dual of a
hypergraph where we simply switch target and incidence sets so that every
node becomes a link and every link becomes a node).
To implement that, we'll need a new interface to represent such links and
we'll need to modify the code carefully to account for the distinction b/w
the current HGLinks and that new interface. The basic graph operations that
manage incidence sets implicitly need to be preserved (that's where most of
the work will go). The cache will need some adjustments to cache large
target sets, the query system will need serious modifications, and the
HGAtomType interface will unfortunately need to be changed. So it's no
small business. But I would hate to hold back the evolution of the software
because of a commitment to some interfaces. The cost is not always so high.
For example,
HGAtomType might simply have another 'make' method added to it. Atoms of
this new kind of hyper-edge might have a special system attribute that
marks them as having a "target set" associated (sounds hacky, but it's ok
since those system attributes are read and managed anyway). There are
always ways, given enough motivation...
Original issue reported on code.google.com by [email protected]
on 23 Jan 2010 at 3:58
Java beans are represented as record structures where the slots are part of
the value of an atom.
An alternative is to represent Java beans as links b/w slots. This is not
necessarily better, because it precludes a bean from being a link
independently of its record structure. Or the slot values may simply not
have to become atoms and be linked, etc. It all depends on the
representation needs. But we should have a JavaObjectMapper that creates
Java bean types based on that idea. The mapper could revert to the current
mapping if the Java class implements HGLink.
Also, we might want other Java annotations, perhaps one at the class level
that explicitly asks for this (e.g. HGRecordLink) and/or one at the field
level (e.g. HGLinkTarget), so as to allow maximum flexibility in how beans
are represented. The HGLinkTarget annotation could work with classes
implementing HGLink to selectively represent some of the fields as link
targets and others as part of the record value structure.
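A sketch of the proposed field-level annotation (HGLinkTarget does not exist yet; both the annotation and the Employment bean are hypothetical illustrations):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Hypothetical annotation: marks a bean field to be represented as a
// link target rather than as part of the record value structure.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface HGLinkTarget {}

class Employment {
    @HGLinkTarget Object employer;   // would become a link target
    @HGLinkTarget Object employee;   // would become a link target
    String startDate;                // stays in the record value

    // What a JavaObjectMapper might do: reflectively split the fields.
    static List<String> linkTargetFields(Class<?> c) {
        List<String> names = new ArrayList<>();
        for (Field f : c.getDeclaredFields())
            if (f.isAnnotationPresent(HGLinkTarget.class))
                names.add(f.getName());
        return names;
    }
}
```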
Original issue reported on code.google.com by [email protected]
on 13 Jan 2010 at 3:19
This takes an extra low-level K/V access to read/write attributes whenever
an atom is read or stored in the DB. Since many applications don't care
about those flags (especially since right now they are not really working),
this should all be optional in the DB configuration.
Original issue reported on code.google.com by [email protected]
on 19 May 2010 at 8:25
Since HGDB apps are able to add new primitive types or type constructors to a
hypergraphdb instance, it should be possible to remove them as well.
Currently, HGDB throws an exception when an attempt is made to remove an atom
whose type is Top.
Original issue reported on code.google.com by [email protected]
on 4 Jul 2009 at 9:50
Given an atom x that holds the sole hard reference to some other atom y,
when x is updated with graph.update(x), y disappears as a side-effect. The
reason is that update is in fact a "replace" operation:
1) Delete stored value of x.
2) Store current run-time value of x.
But during step (1), the reference count of y goes to 0, and it is deleted
from the graph. Step (2) stores the handle to y correctly, but there's no
more y after that.
A nasty side-effect that we need to find a way to get around.
Original issue reported on code.google.com by [email protected]
on 28 Feb 2008 at 6:19
It would be useful to specify multiple 'start' atoms for a traversal.
Right now, to find all atoms reachable from an initial atom set, one needs
to do each traversal separately and intersect the results, which is ok, but
not optimal.
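A single traversal seeded with several start atoms could look like the sketch below, over a toy adjacency map (node -> neighbors). It computes the combined set of reachable atoms in one pass; the node names and map representation are illustrative, not the HGDB traversal API.

```java
import java.util.ArrayDeque;
import java.util.Collection;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: breadth-first traversal with a multi-atom start set. One shared
// visited set replaces running a separate traversal per start atom.
class MultiStartTraversal {
    static Set<String> reachable(Map<String, List<String>> adj,
                                 Collection<String> starts) {
        Set<String> visited = new LinkedHashSet<>(starts);
        Deque<String> frontier = new ArrayDeque<>(starts);
        while (!frontier.isEmpty()) {
            String node = frontier.poll();
            for (String next : adj.getOrDefault(node, List.of())) {
                if (visited.add(next))
                    frontier.add(next);
            }
        }
        return visited;
    }
}
```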
Original issue reported on code.google.com by [email protected]
on 28 Apr 2008 at 6:33
See: http://datadraw.sourceforge.net
Original issue reported on code.google.com by [email protected]
on 2 Jan 2010 at 5:40
This should be a special deployment that doesn't need a backing HGDB and
that returns a 'notUnderstand' message to everything thrown at it.
Original issue reported on code.google.com by [email protected]
on 20 Mar 2009 at 4:18
What version of the product are you using? On what operating system?
1.1alpha on windows 64 bit with Java 64bit
Please provide any additional information below.
The Berkeley DB utilities for 64-bit that are included with the 1.1 alpha
are for version 4.7, and not for version 5.0.21, which is the one included
in the folder.
Original issue reported on code.google.com by alpic80
on 17 Sep 2010 at 12:07
this should complete the MVCC implementation....
Original issue reported on code.google.com by [email protected]
on 12 Nov 2010 at 1:10
SerializableType uses standard Java serialization to store objects as blobs
in HGDB. For that, it relies on ObjectInputStream/ObjectOutputStream. But
the latter use the class loader of the SerializableType class itself. So
when HGDB is loaded by a different class loader than the classes of the
atoms, deserialization fails. There's no way to make a plain
ObjectInputStream use the current thread's classloader. This is
unfortunate, but it means that we pretty much need to implement our own
custom serialization. Or force all atoms to be saved in HGDB to be
default-constructible...
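For reference, the usual JDK-level workaround is to subclass ObjectInputStream and override resolveClass to go through the thread's context class loader; whether this fits the constraints here (it requires controlling stream construction everywhere deserialization happens) is a separate question:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.io.ObjectStreamClass;

// Sketch: an ObjectInputStream that resolves classes through the current
// thread's context class loader instead of the loader of the stream
// class itself.
class ContextLoaderObjectInputStream extends ObjectInputStream {
    ContextLoaderObjectInputStream(InputStream in) throws IOException {
        super(in);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        try {
            return Class.forName(desc.getName(), false, cl);
        } catch (ClassNotFoundException e) {
            return super.resolveClass(desc); // fall back to the default
        }
    }
}
```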
Original issue reported on code.google.com by [email protected]
on 17 Feb 2009 at 5:30
[deleted issue]
We need to do this as it is done for the topic maps implementation. The
HGApplication essentially allows you to install HG types with predefined
handles and provides lifecycle management for updates etc. But the most
important part is the ability to have an installation step where indices,
types etc. are created once and there's no need to check every time for
their existence.
Original issue reported on code.google.com by [email protected]
on 23 Nov 2007 at 4:45
BerkeleyDB returns a failed cursor on a range query where the key is
larger than everything in the DB. Also, it doesn't return a default byte[]
comparator when no custom one was specified... Anyway, this method needs
more thorough testing...
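Berkeley DB's default key ordering, applied when no custom comparator is configured, is an unsigned lexicographic byte comparison. A standalone equivalent, useful for testing this method, might look like the following sketch (not taken from the HGDB sources):

```java
import java.util.Comparator;

// Sketch: unsigned lexicographic byte[] comparison, matching Berkeley
// DB's default btree key ordering.
public class DefaultByteComparator implements Comparator<byte[]> {
    @Override
    public int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int x = a[i] & 0xFF;  // compare bytes as unsigned values
            int y = b[i] & 0xFF;
            if (x != y) return x - y;
        }
        return a.length - b.length;  // on a shared prefix, shorter key sorts first
    }
}
```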
Original issue reported on code.google.com by [email protected]
on 31 Mar 2010 at 4:56
What steps will reproduce the problem?
1. Download HGDB 1.0
2. Put all jars in classpath
3. Call new HyperGraph(databaseLocation) where databaseLocation has not
been created yet.
What is the expected output?
Successful completion of constructor
What do you see instead?
Here is the stacktrace:
org.hypergraphdb.HGException: java.lang.NoSuchMethodError: org.hypergraphdb.storage.LinkBinding.objectToEntry(Ljava/lang/Object;Lcom/sleepycat/db/DatabaseEntry;)V
at org.hypergraphdb.HyperGraph.open(HyperGraph.java:370)
at org.hypergraphdb.HyperGraph.open(HyperGraph.java:208)
at org.hypergraphdb.HyperGraph.<init>(HyperGraph.java:195)
at com.iovation.gas.util.hgdb.Loader.load(Loader.java:23)
at com.iovation.gas.util.hgdb.Loader.main(Loader.java:114)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:110)
Caused by: java.lang.NoSuchMethodError: org.hypergraphdb.storage.LinkBinding.objectToEntry(Ljava/lang/Object;Lcom/sleepycat/db/DatabaseEntry;)V
at org.hypergraphdb.HGStore.store(HGStore.java:240)
at org.hypergraphdb.HGTypeSystem.addPrimitiveTypeToStore(HGTypeSystem.java:150)
at org.hypergraphdb.HGTypeSystem.bootstrap(HGTypeSystem.java:186)
at org.hypergraphdb.HyperGraph.open(HyperGraph.java:332)
... 9 more
What version of the product are you using? On what operating system?
HyperGraphDB version: 1.0
OS: Windows XP
Sleepycat: je-4.0.103
Please provide any additional information below.
I looked in the je-4.0.103 jar, and there is no com.sleepycat.db package.
Original issue reported on code.google.com by [email protected]
on 11 May 2010 at 1:21
What steps will reproduce the problem?
1. Create new Niche in Scriba
2. Open New Notebook and write the following:
import com.kobrix.notebook.*;
AppConfig.getInstance().setProperty(AppConfig.SPACES_PER_TAB, 5);
This will force AppConfig to be unloaded, because it is changed.
3. Exit Scriba.
What is the expected output? What do you see instead?
You should see the following stack trace:
org.hypergraphdb.HGException: Problem while unloading atom
com.kobrix.notebook.AppConfig@c759f5 of type com.kobrix.notebook.AppConfig
at org.hypergraphdb.HyperGraph.unloadAtom(HyperGraph.java:1527)
at org.hypergraphdb.HyperGraph.access$1200(HyperGraph.java:82)
at org.hypergraphdb.HyperGraph$10.handle(HyperGraph.java:1934)
at org.hypergraphdb.HyperGraph$10.handle(HyperGraph.java:1932)
at org.hypergraphdb.event.HGEventManager.dispatch(HGEventManager.java:57)
at org.hypergraphdb.cache.WeakRefAtomCache.close(WeakRefAtomCache.java:249)
at org.hypergraphdb.HyperGraph.close(HyperGraph.java:239)
at com.kobrix.scriba.boot.Main$1.run(Main.java:33)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NullPointerException
at org.hypergraphdb.HyperGraph.getPersistentHandle(HyperGraph.java:308)
at org.hypergraphdb.HGTypeSystem.defineNewJavaTypeTransaction(HGTypeSystem.java:345)
at org.hypergraphdb.HGTypeSystem.defineNewJavaType(HGTypeSystem.java:283)
at org.hypergraphdb.HGTypeSystem.getTypeHandle(HGTypeSystem.java:690)
at org.hypergraphdb.type.JavaTypeFactory.defineHGType(JavaTypeFactory.java:225)
at org.hypergraphdb.HGTypeSystem.defineNewJavaTypeTransaction(HGTypeSystem.java:313)
at org.hypergraphdb.HGTypeSystem.defineNewJavaType(HGTypeSystem.java:283)
at org.hypergraphdb.HGTypeSystem.getTypeHandle(HGTypeSystem.java:690)
at org.hypergraphdb.type.CollectionType.store(CollectionType.java:86)
at org.hypergraphdb.type.TypeUtils.storeValue(TypeUtils.java:215)
at org.hypergraphdb.type.MapType.store(MapType.java:82)
at org.hypergraphdb.type.TypeUtils.storeValue(TypeUtils.java:215)
at org.hypergraphdb.type.RecordType.store(RecordType.java:213)
at org.hypergraphdb.type.JavaBeanBinding.store(JavaBeanBinding.java:111)
at org.hypergraphdb.HyperGraph$9.call(HyperGraph.java:1800)
at org.hypergraphdb.transaction.HGTransactionManager.transact(HGTransactionManager.java:206)
at org.hypergraphdb.HyperGraph.replaceInternal(HyperGraph.java:1745)
at org.hypergraphdb.HyperGraph.replace(HyperGraph.java:974)
at org.hypergraphdb.HyperGraph.replace(HyperGraph.java:921)
at org.hypergraphdb.HyperGraph$8.call(HyperGraph.java:1505)
at org.hypergraphdb.transaction.HGTransactionManager.transact(HGTransactionManager.java:206)
at org.hypergraphdb.HyperGraph.unloadAtom(HyperGraph.java:1484)
... 8 more
Please use labels and text to provide additional information.
Original issue reported on code.google.com by [email protected]
on 31 Aug 2007 at 5:34
This would improve cache performance, which right now needs to synchronize on
the WeakHashMap used for the atom->HGHandle mapping. This is essentially a
blend between the standard WeakHashMap implementation and the standard
ConcurrentHashMap. It would need to be seriously tested, of course...
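A minimal sketch of such a blend, assuming identity-based keys as in an atom cache (hypothetical class, not the HGDB implementation): keys are held in WeakReferences registered with a ReferenceQueue, and cleared entries are expunged on each access.

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: a concurrent map with weakly referenced, identity-compared
// keys, giving lock-free reads via ConcurrentHashMap.
public class ConcurrentWeakIdentityMap<K, V> {
    private final ConcurrentHashMap<WeakKey<K>, V> map = new ConcurrentHashMap<>();
    private final ReferenceQueue<K> queue = new ReferenceQueue<>();

    private static final class WeakKey<K> extends WeakReference<K> {
        private final int hash;  // cached, so lookups survive key clearing
        WeakKey(K key, ReferenceQueue<K> q) { super(key, q); hash = System.identityHashCode(key); }
        WeakKey(K key) { super(key); hash = System.identityHashCode(key); }
        @Override public int hashCode() { return hash; }
        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof WeakKey)) return false;
            Object a = this.get(), b = ((WeakKey<?>) o).get();
            return a != null && a == b;  // identity comparison on referents
        }
    }

    // Drop entries whose keys have been garbage collected.
    private void expunge() {
        java.lang.ref.Reference<? extends K> ref;
        while ((ref = queue.poll()) != null)
            map.remove(ref);  // the enqueued reference IS the stored map key
    }

    public V put(K key, V value) {
        expunge();
        return map.put(new WeakKey<>(key, queue), value);
    }

    public V get(K key) {
        expunge();
        return map.get(new WeakKey<>(key));  // probe key needs no queue
    }
}
```

Unlike the real cache, this sketch does not coordinate expunging with concurrent writers; serious testing would indeed be required.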
Original issue reported on code.google.com by [email protected]
on 9 Aug 2010 at 9:01
Currently all metadata for user-defined indices (such as the base type and
dimension path) is recorded in a text file in the database location
directory. This file is read upon startup and written upon shutdown.
We should instead store such metadata as published HGDB atoms. This will
have several advantages:
1) It will be easily viewable by generic HGDB viewing tools.
2) It will be easily queryable like every other atom, without the need for
a special-purpose API.
3) It will open the door to more flexible index management, because the
metadata will not be constrained by the simple text-file format,
each index will be HGHandle-identifiable, etc. This will easily
allow for custom pluggable index management, for the courageous.
4) Currently, all indices need to be instantiated at startup time. This
forces the startup class loader to have all relevant classes available,
which plays badly with environments such as Scriba, where certain
indices are only ever used in certain circumstances and within their own
class loaders.
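As a sketch of what such a published atom might carry (hypothetical class, not the HGDB API), the metadata reduces to a default-constructible bean that, once stored as an atom, is viewable, queryable, and HGHandle-identifiable like anything else in the graph:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: index metadata as a plain bean suitable for storage as an atom.
public class IndexMetadata {
    private String baseTypeName;                             // the indexed base type
    private List<String> dimensionPath = new ArrayList<>();  // path to the indexed dimension

    public IndexMetadata() { }

    public String getBaseTypeName() { return baseTypeName; }
    public void setBaseTypeName(String n) { baseTypeName = n; }
    public List<String> getDimensionPath() { return dimensionPath; }
    public void setDimensionPath(List<String> p) { dimensionPath = p; }
}
```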
Original issue reported on code.google.com by [email protected]
on 6 Aug 2007 at 12:41
This would enable the automatic (type-based) creation of indices from a
given target of a link (identified by its position within the link) to
another target within the same link.
Original issue reported on code.google.com by [email protected]
on 15 Dec 2007 at 1:10
A normal, "flat" traversal is based on standard, flat graphs: each node has
a set of neighbors that are directly linked to it. When links contain more
than two nodes, this just yields more neighbors per link, but the structure
is still flat. However, when links contain other links, the structure is no
longer flat. A traversal might need to follow the links that it visits.
Conversely, a traversal might need to visit the links that it follows to
discover neighbors. What breadth-first/depth-first means in this context
is not very clear to me, because we have more dimensions (i.e. more ways to
go at each step).
An API needs to be worked out for defining hyper-traversals, to see what
kinds of algorithms can be built on top of them.
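To make the extra dimension concrete, here is a hedged sketch (hypothetical types, not a proposed HGDB API) of a breadth-first hyper-traversal in which links are enqueued and visited as atoms in their own right:

```java
import java.util.*;

// Sketch: BFS over a hypergraph where links are atoms too. Visiting an
// atom enqueues both the links incident to it AND the other targets of
// those links, so link-on-link structure is explored as well.
public class HyperBfs {
    // Atoms are identified by Integer handles. 'incidence' maps an atom
    // to the links pointing to it; 'targets' maps a link to its targets
    // (which may themselves be links).
    public static List<Integer> traverse(int start,
                                         Map<Integer, List<Integer>> incidence,
                                         Map<Integer, List<Integer>> targets) {
        List<Integer> visited = new ArrayList<>();
        Set<Integer> seen = new HashSet<>();
        Deque<Integer> queue = new ArrayDeque<>();
        queue.add(start);
        seen.add(start);
        while (!queue.isEmpty()) {
            int atom = queue.poll();
            visited.add(atom);
            for (int link : incidence.getOrDefault(atom, List.of())) {
                // The link itself is a visitable neighbor...
                if (seen.add(link)) queue.add(link);
                // ...and so is every other target of that link.
                for (int t : targets.getOrDefault(link, List.of()))
                    if (seen.add(t)) queue.add(t);
            }
        }
        return visited;
    }
}
```

In this sketch "breadth-first" means one shared frontier for nodes and links; a real API would likely let the traversal policy decide which of the extra dimensions to follow.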
Original issue reported on code.google.com by [email protected]
on 4 Jul 2009 at 5:22