onzag / itemize
Easy, Fast, Scalable, Reliable; ReactJS based FullStack engine.
Home Page: https://onzasystems.com
License: Other
Currently search works this way.
There are flaws with this method.
The search should instead work as follows.
This should fix the flaws; some things to consider:
This will be part of the way to monetize Itemize: providing a service that adds payment support for safe monetary transactions. It should be as easy as a simple definition and an API key, and the service should be linked to a payment provider.
Proposing the introduction of a tags type, which represents itself as an array of the text type and allows tagging content with special predefined tags.
The valid tags are given via the values, and each is expected to have a translation.
It might be possible to also allow custom tags, and whether a tag is standard or custom will affect how search is done; the problem with custom tags is translation: custom tags can't be translated or carry i18n content.
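A minimal sketch of how validation for such a tags type might look. The property shape and field names (`values`, `allowCustomTags`) are assumptions for illustration, not the actual itemize API:

```typescript
// Hypothetical shape of a tags property definition; field names are assumptions.
interface TagsPropertyDefinition {
  type: "tags";
  values: string[];          // predefined valid tags, each expected to carry i18n data
  allowCustomTags?: boolean; // custom tags cannot be translated
}

// Validate a submitted tag array against the definition.
function validateTags(def: TagsPropertyDefinition, tags: string[]): boolean {
  // a tags value is an array of text; reject duplicates
  if (new Set(tags).size !== tags.length) {
    return false;
  }
  // every tag must be predefined unless custom tags are allowed
  return tags.every((t) => def.values.includes(t) || !!def.allowCustomTags);
}
```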
Right now ids follow a standard auto-increment mechanism in the database. This is not optimal, as it makes ids easy to guess, exposes the structure of the database, and allows overriding what should be protected namespaces, such as in the case of fragments.
Two solutions come to mind.
The second solution proves to be the most flexible: making the id type a string would be incredibly effective and would also allow custom identifiers to be set; however, it would require a major refactor where every single id in every single file would need to be changed to string/text.
The first would be easier to implement, since it would only require a simple change to the way the index works and how values are created, but it would not allow custom ids.
I gravitate towards the first because it's easier and makes little difference to the end user; however, it creates a coding situation where numeric values hold some meaning, which is a hurdle for the programmer. We could probably make the first 1000 values protected and unable to be held by an index.
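A sketch of what the two options might look like in code. The reserved range of 1000 comes from the discussion above, and the string id length is an arbitrary choice, not a settled value:

```typescript
import { randomBytes } from "crypto";

// Option 1: keep numeric auto-increment ids but reserve the first 1000
// values for protected namespaces such as fragments.
const RESERVED_ID_RANGE = 1000; // assumed cutoff from the discussion above

function isProtectedNumericId(id: number): boolean {
  return id >= 1 && id <= RESERVED_ID_RANGE;
}

// Option 2: string ids that cannot be guessed and allow custom identifiers.
function generateStringId(): string {
  // 12 random bytes -> 24 hex chars; the length is an arbitrary choice
  return randomBytes(12).toString("hex");
}
```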
Right now copy and paste doesn't work quite right as soon as images/files are added into the mix, and dropping images/files doesn't work either.
Currently not much testing happens with puppeteer; when the testing mode is enabled, there are a couple of things that need to be established.
In certain unlikely scenarios mutations can be merged by the gql query merger, creating a request that will be rejected by the server because of the sheer number of files that have been passed.
In order to fix this bug, we should ensure that the file limitations are not exceeded by the merger.
This should be as easy as counting the files and their total size, and toggling the boolean merge flag off when the limit would be reached.
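The counting logic described above could be sketched like this; the specific limits are placeholders, not itemize's real configuration values:

```typescript
// Placeholder limits; the real values would come from server configuration.
const MAX_FILES_PER_REQUEST = 10;
const MAX_TOTAL_FILE_SIZE = 50 * 1024 * 1024; // 50 MB, assumed

interface PendingFile {
  size: number; // bytes
}

// Decide whether two queries' files can be merged into one request
// without exceeding the server's file limitations.
function canMergeFiles(a: PendingFile[], b: PendingFile[]): boolean {
  const merged = a.concat(b);
  if (merged.length > MAX_FILES_PER_REQUEST) {
    return false; // too many files, keep the queries separate
  }
  const totalSize = merged.reduce((sum, f) => sum + f.size, 0);
  return totalSize <= MAX_TOTAL_FILE_SIZE;
}
```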
The standard CMS that comes with itemize is just not very good.
In some situations the socket is unidentified when registering; there seems to be a timing issue that prevents the token from being registered via the handshake on the socket.io remote protocol.
It can be easily replicated by forcefully making the identify request wait 3 seconds before replying: the client still sends the register requests despite not having received confirmation from the server that it is allowed to do so, causing an error.
Once #10 is fixed, a by-property search domain might be added to support integer and string identifier types.
Currently the domains that exist are:
The by-property domain would allow choosing a property and a property value to act as the domain; event listeners would be fired and registered based on that property, and the cache worker on the client side should also be able to handle results in this domain.
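The domain key for such events might be built along these lines; the prefix and separator are assumptions for illustration, not an existing itemize convention:

```typescript
// Build a hypothetical by-property event domain key out of the item
// definition, the chosen property, and its value; the format is illustrative.
function buildByPropertyDomain(
  itemDefinitionQualifiedName: string,
  propertyId: string,
  propertyValue: string | number,
): string {
  return [
    itemDefinitionQualifiedName,
    "BY_PROPERTY",
    propertyId,
    String(propertyValue),
  ].join(".");
}
```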
If a cluster manager loses the connection of its redis clients to the global cache, it should wipe the cache and remove all the listeners, since it doesn't know whether it missed events during the outage, and the cache suddenly becomes invalid.
So all the cache should be marked invalid: no feedback to check, simply blow it away, and add a log message of error type, because this shouldn't have happened in the first place.
Knex should have no issue with this, as an outage will cause the endpoints to crash with INTERNAL_SERVER_ERROR, and once it recovers any knex-related functionality should be maintained; no such case exists for the local cluster cache, so that cache should be blown away.
Remembering that sometimes the cache is the same, we should ensure not to blow global cache variables; granted, this shouldn't even happen, because then there's a single cluster and the global cache is the same, but it keeps things consistent.
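A minimal sketch of the wipe-on-disconnect rule, assuming a redis-like client that emits connection events, with the guard against blowing a cache that is also the global one:

```typescript
interface RedisLikeClient {
  on(event: "end" | "connect", cb: () => void): void;
}

// Wire the cluster cache so it invalidates itself whenever the
// connection to the global cache is lost; names are illustrative.
function wireCacheInvalidation(
  client: RedisLikeClient,
  localCache: Map<string, unknown>,
  isSameAsGlobal: boolean,
  logError: (msg: string) => void,
) {
  client.on("end", () => {
    // events may have been missed during the outage, so the whole
    // local cache is now suspect; blow it rather than trust it
    if (!isSameAsGlobal) {
      localCache.clear();
    }
    // this shouldn't have happened in the first place
    logError("lost connection to the global cache; local cache wiped");
  });
}
```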
Flagging and modding are already properties in the system; even modRoleAccess is there, but nothing can currently be done with these properties.
All the necessary properties exist, they are just not implemented; there are even commented-out lines where flagging should take part.
Search for the modding behaviour needs to be added nevertheless, both in standard and traditional search, for items that need moderation, are flagged, or are blocked; there need to be args for them that moderators can use.
We should provide archival support; the way it should be handled: archived elements are simply down-prioritized.
Archival should mostly be necessary when we are talking about large data; we shouldn't endorse the use of archival for small units of data.
Check https://www.postgresql.org/docs/9.3/ddl-partitioning.html to see how it might be implemented; this might affect the way the schema is created and built, so build-database might need to change to enable archiving support.
We need a way to add third-party services via the customization attributes of the config.
Optimally they would also be services like the mail service, storage service, etc.; they should be added to the service configuration file and consume information from the custom attributes of both the sensitive and standard config.
Currently there are language keywords added to each item definition for search mode.
These keywords are not really used anywhere.
Module-based search should allow filtering by item definition type, making use of these keywords.
Users should be allowed to unsubscribe from emails from the email itself, and a REST endpoint should be added that allows users, given a token, to unsubscribe, setting the property that represents the subscription to false.
Likewise, sending emails via the mail service should be linked to a property, and the email should not be sent if the user it represents proves to be unsubscribed; an alternative unvalidated endpoint should be offered as well.
The unsubscribe link should somehow be injected into the template, when we talk about templates.
But since headers are also a thing, this requires integration with whatever the email service provider is; for the default provider, mailgun, the documentation must be checked to ensure things are in sync, since the process should work in a provider-agnostic way and not create a hard link.
Emails are going to spam due to the lack of unsubscription; this should solve that.
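The token could be as simple as an HMAC over the user id and the subscription property, so the endpoint can verify it statelessly. The payload shape and secret handling here are assumptions, not itemize's actual token format:

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Build an unsubscribe token binding a user to a subscription property;
// the payload shape is an illustrative assumption.
function makeUnsubscribeToken(userId: string, property: string, secret: string): string {
  const payload = `${userId}:${property}`;
  const sig = createHmac("sha256", secret).update(payload).digest("hex");
  return Buffer.from(`${payload}:${sig}`).toString("base64url");
}

// Verify the token; returns the user id and property if valid, null otherwise.
function verifyUnsubscribeToken(
  token: string,
  secret: string,
): { userId: string; property: string } | null {
  const decoded = Buffer.from(token, "base64url").toString("utf8");
  const [userId, property, sig] = decoded.split(":");
  if (!userId || !property || !sig) {
    return null;
  }
  const expected = createHmac("sha256", secret)
    .update(`${userId}:${property}`)
    .digest("hex");
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  // constant-time comparison to avoid leaking the signature
  if (a.length !== b.length || !timingSafeEqual(a, b)) {
    return null;
  }
  return { userId, property };
}
```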
The website that contains information about Itemize in a more user-friendly way is currently down.
The repository is at https://github.com/onzag/onzasystems, but it uses a very old version of itemize that has been heavily reworked, so it will most likely need to be redone.
The website is important and should present the key features of itemize and what it tries to achieve in a less "geeky" way than github.
This will be a powerful change that separates much of the work of programmers and translators from the work of designers, leading to efficiency: designers will no longer need programmers to make even complex changes to the structure of the application, and can modify the website in real time in production builds.
The documentation is very incomplete; almost nothing is documented, which prevents any newcomer, or really anyone, from using this platform.
Documentation should be written for at least the stable modules and go on from there.
The server-side cache is very powerful and provides ways to cache what it refers to as IDEFQUERY, or item definition single-get queries.
Item definitions could be marked for caching via a cacheSince attribute, which would preferably equal the request limiter's since attribute; another option is cacheAll, to literally cache everything for that item.
Add event listeners for records added, records removed, and records edited (similar to #10, so #10 should be done first) for a none domain, so that these changes are simply reported at the table level, keeping them updated in each cluster-level cache by the cluster manager.
Every record should then register and store each of its respective IDEFQUERYs as well, similar to how the cache worker does it on the client side.
The records stored in the cache should differ from the standard records: during the cache and indexing event we should retrieve both last_modified and created_at in order to keep these records clean as they age, because we don't want a forever-growing cache; however, we would only clean it as we need to change it.
Maintaining these records will be the job of the cluster manager, which would even download and register them; the extended nodes will have nothing to do with this.
However, appData.cache will then need a search method so that it can use those cached records to perform its search, using the local search functions to do the matching.
After filtering is done using the local search functions, we would have to apply local ordering as well; these are the same functions used on the client side for the cache worker.
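The age-based cleanup described above might look like this; the retention window is a placeholder that would preferably be tied to the request limiter's since attribute:

```typescript
// A cached search record tracking the timestamps needed for cleanup.
interface CachedRecord {
  id: string;
  last_modified: number; // ms since epoch
  created_at: number;    // ms since epoch
}

// Placeholder retention window; preferably equal to the request
// limiter's since attribute rather than this assumed 30 days.
const CACHE_SINCE_MS = 30 * 24 * 60 * 60 * 1000;

// Drop records that have aged out; called only when the cache is
// about to change anyway, since we don't clean it eagerly.
function pruneAgedRecords(records: CachedRecord[], now: number): CachedRecord[] {
  return records.filter((r) => now - r.last_modified <= CACHE_SINCE_MS);
}
```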
Pitfalls: unsure whether this will be implemented at all, because postgresql is fast as it is and searches are not very common, so it doesn't add a lot of value unless the system is so big that searches are constantly performed and the database needs relief.
A way must be devised so that analytics can be supported (for whichever analytics platform is used, e.g. google analytics).
Analytics are a powerful tool for managing a website.
While they are not currently required, they should certainly be added later, as supporting them adds quite some value to the platform; probably via an event-driven system that can be linked via a plugin, with these actions reported from the item provider and others coming from popstate.
The functionality for includes, which allows extending one item definition with another, is not yet implemented, or is otherwise untested.
These are:
The includes should be added and made to work at some point.
Includes have proven less useful than expected, so this issue has little priority; in fact, it might be better to remove them altogether if they prove messy to implement.
The files type has no handler and no fast-prototyping entry or view.
While this is not currently very relevant, because the text type can inject the file property as media, which is the best way to use such a property, there should be a handler and a fast-prototyping entry and view in order to maintain consistency.
It should be similar to the file type, but allow displaying many files.
QuillJS does not support the entirety of the text specification and does not provide a nice way to fully edit templatized fragments and/or build fragments.
SlateJS (https://github.com/ianstormtaylor/slate) should support all these configurations out of the box and should replace quill.
Automatic search and search results are not taken into consideration by the SSR functionality; this means they do not SSR.
With the beforeSSRRender method this should be achievable, as we can now capture props and state on the fly; however, caution is advised with the search state that is loaded from the navigation key, especially since this state must be loaded in the constructor and might not match between server and client. This used to cause issues in the past and was moved to componentDidMount because the search state loaded from navigation disrupted SSR.
Resources loaded via the HTMLResource loader should also be added to this list of SSR resources; this one should be rather easy.
This should drastically improve page responsiveness and SEO where these items appear; granted, they are not in common use.