dhis2 / app-runtime
A singular runtime dependency for applications on the DHIS2 platform
Home Page: https://runtime.dhis2.nu
The documentation on the mutation type implies that a mutation of type create can use the id property:

Property: id - Type: string or () => string - Description: Required for update and delete mutations, otherwise optional. Indicates that a particular instance of a collection will be mutated...

However, using the id property in a create mutation yields an error.
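For illustration, a mutation of the shape the docs imply should be valid might look like this (the resource name and callbacks are hypothetical):

```javascript
// Hypothetical sketch: per the docs, `id` should be optional for `create`,
// but the engine currently rejects it with a validation error.
const createMutation = {
    resource: 'dataElements',
    type: 'create',
    // documented as "otherwise optional", yet not allowed for create:
    id: ({ id }) => id,
    data: ({ dataElement }) => dataElement,
}
```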
The type property on a mutation query is reserved for telling the engine what kind of operation should be done. However, the endpoint for adding a generic gateway configuration for sms expects a type parameter (in this case with the content "http").
EDIT: Actually, every type of gateway (generic, bulksms, clickatell, etc.) needs to have a type specified.
Proposed solution:
Use a params object, just like the query object for fetching data.
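A hedged sketch of what the proposed params-based mutation could look like (the resource name and data callback are assumptions):

```javascript
// Hypothetical shape of the proposed solution: the reserved `type` keeps
// describing the engine operation, while the gateway type travels in `params`.
const addGatewayMutation = {
    resource: 'gateways',
    type: 'create', // reserved: the engine's operation type
    params: { type: 'http' }, // proposed: the gateway type for the endpoint
    data: ({ gateway }) => gateway,
}
```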
false should be encoded as "false" here.
Whenever we want to query a nested graph, we may want to insert dynamic properties inside the resource.
const validOrgUnitsQuery = {
orgUnits: ({ dataSetId, orgUnit }) => ({
resource: `dataSets/${dataSetId}/organisationUnits/gist`,
params: {
fields: ["id"],
filter: `path:like:${orgUnit}`,
total: true,
},
}),
};
and later I may want to do
const { loading, error, data, refetch } = useDataQuery(
validOrgUnitsQuery,
{ variables: { dataSetId: selectedDataSet?.id, orgUnit: selectedOrgUnit?.id } }
)
Currently this is not supported, and yields the following error object:
{
"type": "invalid-query",
"details": [
"[orgUnits] Property resource must be a string"
]
}
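As a sketch of what support for this could look like, a hypothetical resolveDynamicQuery helper (not part of the API) could call any function-valued field of a ResourceQuery, including resource, with the variables:

```javascript
// Sketch of how the engine could support callbacks for any field of a
// ResourceQuery, including `resource`, which it currently rejects.
// `resolveDynamicQuery` is an invented helper for illustration.
const resolveDynamicQuery = (resourceQuery, variables) =>
    Object.fromEntries(
        Object.entries(resourceQuery).map(([field, value]) => [
            field,
            typeof value === 'function' ? value(variables) : value,
        ])
    )
```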
Continuing what I meant in this comment:
What I'll try to talk about here won't have any implications for the code in the near future; it's just a thought about what direction we could try to head in:
One of the general goals is to keep the data engine and the links separate from implementation.
@amcgee already separated the whole functionality into engine, links and react.
In the long term, we'll want to have an architecture that's framework & workflow agnostic,
so we'll need different connecting "middleware" between the app and the data engine.
While for now we're focused purely on react, I think we'll end up with situations where it's preferable to put data logic into the state domain.
Let's say for now that we're react based, so the highest level of data engine application is the react context in our application, which means that we're effectively wrapping the whole application with the react-connecting domain's context provider.
This is where redux can pick up. It's been a standard for quite some time to use react-redux, which has its own provider for the store, so once the engine and links themselves have been changed to a pure, functional paradigm, we can provide a provider and reducers/actions that leverage the react-redux provider, which gets the context (baseUrl, etc.) from the react-based context.
I think this will allow us to separate data logic into two domains:
My idea for now is that, in the future, we have data services (right now these are config and data), the runtime as a wrapper itself, and connectors, which sit in the middle between the services and the app. This will include the react domain (what we have right now) and - because we have a lot of redux apps - a redux domain.
With the goal "keep the data engine and the links separate from implementation", we need to keep the react & redux parts separate; the closer the code is to the engine and links in this library, the blurrier the lines between them will be. Also, the data folder would fill up with other folders (right now it's just react, but it could also be "redux" and "angular"). That's why I'd separate these domains completely, as I'd group the connectors in their own folder/namespace anyway.
The current idea in my head for redux is (this is far from a mature implementation yet):
- A Provider, which will store the context in the store by registering an additional reducer
- createReducer / createActionCreator helpers, which accept a namespace to prefix action types and use thunks internally to get access to the context state, which can be used to provide more or less the same API as the useDataQuery hook

useDataQuery is great for simple use cases, especially for queries that are independent and fired only once. It's also incredibly easy to fire a request in an app-platform app, due to no setup required.
However, it's not very flexible. We're using react-query under the hood, but do not expose most of the options.
I will try to explain some of the issues I've encountered while working with useDataQuery
, and some suggestions for how we can improve it. There's a TLDR with suggestions at the end.
The query-object passed to useDataQuery is static, and cannot be changed.
To facilitate "dynamic" fields and parameters, we support having a callback in the object. However, this is not supported for resource, which in my opinion is quite an arbitrary limitation, and there's even an open issue from a third-party dev about this. In many cases a static resource is fine, but not if you try to make generic components.
There's actually a hacky workaround to this (passing resource: '/', and the actual resource endpoint as part of the dynamic id: id: ({ resource, id }) => `${resource}/${id}`).
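For reference, a sketch of that workaround (the variable names passed to the id callback are assumed):

```javascript
// Hacky workaround sketch: a static '/' resource, with the real endpoint
// smuggled in through the dynamic `id` callback.
const workaroundQuery = {
    orgUnits: {
        resource: '/',
        id: ({ resource, id }) => `${resource}/${id}`,
        params: { fields: ['id'] },
    },
}
```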
Nevertheless, I think this callback in the query-object is confusing, especially to new devs.
I understand why it's done, because useDataQuery
was implemented first without react-query
, and thus didn't have deterministic hashing of the query. But since then we've moved to use react-query
under the hood, and I think we should leverage the functionality it provides.
It's been argued that a static query-object makes it possible to statically analyse the code and potentially "hoist" requests to eg. the "page"-level. I think the theory behind this is interesting, but for one, I don't think it's something we will ever do in practice. Second, I think this is something the app should have more control over, and eg. do in the router (see react-router loader). Third, if we really wanted to do this - it should still be possible to identify queries that are indeed static (we would still need to do this today, due to dynamic params). Fourth; in my opinion it sacrifices too much in terms of DX and limitations for a theoretical use-case.
react-query
has a concept of query-keys
, which is a way to uniquely identify a query. It can be any object, and it's deterministically hashed. Since our query-object
uniquely identifies a query, it's perfect for a query-key
. We could easily simplify our query-objects and remove the callbacks from params and ids - and by using these as keys directly, it would support dynamic parameters everywhere - and react-query
would handle refetching if any of the keys in the object change.
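To illustrate the deterministic hashing, here is a simplified re-implementation in the spirit of react-query's key hashing (for illustration only, not the actual library code):

```javascript
// Simplified sketch of react-query's deterministic query-key hashing:
// plain objects are serialized with sorted keys, so key order doesn't
// matter but any changed value produces a different hash.
const stableHash = (value) =>
    JSON.stringify(value, (_key, val) =>
        val !== null && typeof val === 'object' && !Array.isArray(val)
            ? Object.keys(val)
                  .sort()
                  .reduce((acc, k) => ((acc[k] = val[k]), acc), {})
            : val
    )
```

Using the query-object itself as the key would then make any dynamic parameter change trigger a refetch automatically.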
Another common use-case is to refetch a query whenever a dependent variable changes. Since the query-object is static, it's obviously not reactive - and thus the only way to send another request with different parameters is to use refetch. This forces consumers into fetching imperatively (call refetch from useEffect), instead of declaratively (refetch when dependencies change).
In our API, refetch
is responsible both for "force refetch identical data", and "refetch when variables change". refetch
in react-query
is intended to imperatively force an update of the data - not to be used as the main mechanism to refetch with new variables. With our API there's no way to differentiate between these use-cases, which causes duplicate requests in certain situations (see below).
In the latest react
-docs updates, the team is discouraging the use of useEffect
. See you might not need an effect and separating events from effects.
In the new docs, the team is adamant that we cannot ignore any dependencies in useEffect. Ignoring them has been useful if you depend on a certain value, but don't want the effect to run every time that value changes. This is one of the problems with depending on useEffect for refetching - there's no way to choose which dependencies should be reactive.
Of course, there has to be a useEffect
somewhere, since a request is an effect. So react-query
"hides" the useEffect
, and thus you really don't need to use useEffect
that often. This is one of the reasons why the react community and documentation encourage you to think it through whenever you use useEffect
. It's not great DX to have to handle useEffect
in user-land for fetching data.
From react-devs:
Keep in mind that modern frameworks provide more efficient built-in data fetching mechanisms than writing Effects directly in your components.
I think we should leverage this.
react-query
's refetch
ignores the usual query-key
checking, and always does a refetch - which makes sense when used in react-query
. However, since we're using refetch
as a main way to refetch data - I've seen a lot of duplicate requests. This will happen unless you have an isFirstRender
-check in the component that calls refetch
from useEffect
(or use lazy: true
).
I will show an example of what I mean.
const query = {
    result: {
        resource: '/dataElements',
        params: ({ page, filter }) => ({
            page,
            filter,
        }),
    },
}
const MyComponent = () => {
    const [{ filter, page }, setQueryParams] = useQueryParams()
    const { data, refetch } = useDataQuery(query)

    useEffect(() => {
        // will send a request even if filter and page are the same as the previous request
        refetch({ filter, page })
    }, [refetch, filter, page])

    return <div>{data}</div>
}
The useEffect
will be called on mount, and thus you will have a duplicate request.
You could of course anticipate this, and pass lazy: true
.
This totally works, but it's something you have to be aware of when using useDataQuery
. There's also a fair amount of code, compared to using react-query
directly:
const [{ filter, page }] = useQueryParams()

const query = {
    // assume simplified query-object
    resource: 'dataElements',
    params: { page, filter },
}

// assumes a default queryFn (wrapping engine.query) is configured on the queryClient
const { data } = useQuery({ queryKey: [query] })
Multiple queries in the same query-object can be powerful, and it's incredibly easy to send requests in parallel. However, due to engine.query() using Promise.all, the resulting loading (and error) props are tied to the Promise.all promise (from react-query's perspective it's really just one query). It's not really suited to eg. loading a paginated list of two different resources: if you refetch one, the other list will also go into a loading state. See my example here: multiple resources in one query.
If we want to support multiple queries on the engine-level, I'd argue we should support engine.queries(queries: ResourceQuery[]). Or just use useQueries from react-query in user-land. This would let us get query-status for each query.
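A minimal sketch of what such an engine.queries could look like (hypothetical, not part of the current API):

```javascript
// Hypothetical engine.queries sketch: execute each ResourceQuery as its own
// request, so every query gets independent loading/error status instead of
// one Promise.all for the whole map.
const queries = (engine, resourceQueries) =>
    resourceQueries.map((resourceQuery) =>
        engine.query({ result: resourceQuery }).then((data) => data.result)
    )
```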
With regard to this, I would also argue we should consider simplifying query-objects to be the ResourceQuery
- eg. instead of
// instead of this
const query = {
    dataElements: {
        resource: 'resource',
        params: {
            fields: ['id', 'name'],
        },
    },
}

// do this
const dataElementQuery = {
    resource: 'resource',
    params: {
        fields: ['id', 'name'],
    },
}
I realize this would be quite the breaking change. But it makes typing the query-objects easier, and it's more in line with how react-query
works. I would also argue that the vast majority of requests do not use multiple resources in the same query-object. And typing the resulting object data.dataElements.dataElements isn't very ergonomic - the number of times I've forgotten the extra dataElements is not insignificant.
We could potentially support both for some time to be backwards compatible. Eg.
public query(query: Query | ResourceQuery, options: QueryExecuteOptions)
Or even make it a permanent feature - it would be easy to check if it's a Query or ResourceQuery (typeof query.resource === 'string'), but it would complicate the implementation and would need function overloads.
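The runtime check itself would be trivial; a sketch:

```javascript
// Sketch of the check mentioned above: a single ResourceQuery has a string
// `resource` at the top level, while a multi-query map does not.
const isResourceQuery = (query) => typeof query.resource === 'string'
```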
The actual DHIS2-specific fetching logic lives in engine.query
. useDataQuery
really doesn't do much; it's basically a thin wrapper around react-query's useQuery
, with engine.query
as the fetcher-function. Again - the original implementation was not built with react-query
in mind, so this is understandable. But in the current state I believe it would be beneficial to just use react-query
directly, and use engine.query
as the fetcher-function. We've done this quite successfully in eg. the aggregate-data-entry-app
.
engine.query
is great, and handles all the annoying parts about data-fetching: handling errors, constructing the url, encoding query-parameters, etc. I think that is the right level of abstraction for an API. The frontend-world moves too quickly for us to reliably keep up with all the modern tools that we and third-party developers want to adopt. And thus, I think it makes sense to provide a simple, agnostic API for data-fetching (which we already do), and let the client orchestrate state binding to whatever framework they use. This is not to say that we shouldn't provide common hooks, but allow the consumer some freedom of what to use.
There's nothing really stopping you from using react-query
with engine.query
right now - which has been proved in the aggregate-data-entry-app
. The queryClient
is also already setup in the app-shell, so it's really easy to use. A simple solution could be "ok, use react-query
directly if you want". And we could keep using useDataQuery
for simple use-cases. However, I want to verify what path we should take - so we agree that this is OK to do. Because right now the fact that we use react-query
is an implementation detail, and from an API point of view, we shouldn't just assume it's there.
The benefit of this is we would use the same cache as the header-bar and the query-client. But we would have to document the fact that there's a queryClient
-provider in the tree, and it's available for use.
I've also worked on mapping query-objects to full model types (example here). By returning these types from engine.query
, while using useQuery
directly, the result would be correctly typed and inferred from the engine.query()
call.
Combined with simplified queries as mentioned earlier, this allows us to use all the functionality of react-query
, while still having a common, declarative representation of the query/request.
Depending on if we want to be "framework agnostic" or "make it as easy as possible", we could either drop react-query
entirely from app-runtime
, and just expose engine.query
- and have examples of usages with react-query
. Or we could continue to setup a default QueryClient
and render the provider - so the client could use react-query
without any setup.
In the latter case, we could make this backwards compatible, by still supporting useDataQuery
(and maybe deprecate it). But officially support the use of react-query
directly.
Another alternative would be to keep useDataQuery(query)
, but simply pass all options to react-query
. Something like this:
type UseDataQueryOptions = Omit<UseQueryOptions, 'queryFn' | 'queryKey'>

function useDataQuery(query: ResourceQuery, options: UseDataQueryOptions) {
    return useQuery({ queryKey: [query], ...options })
}
I have managed to "parse" query-objects to types. However there will always be limitations of what we can type from an object like this. Currently, I'm just mapping eg. a resource like dataElements
to the matching model type. If we were to "simplify" our query-objects, we may have to think about the role of id in the object. If we were to make everything inside the query "dynamic", there would be nothing stopping you from doing something like this: resource: `dataSets/${dataSetId}/organisationUnits/gist`
- and we wouldn't be able to type that. However, you would probably only be able to pass in "known" resources to engine.query
(when using typescript), and would get a type-error if doing that, which could be limiting. So I believe the right way would be doing something like this:
resource: 'dataSets',
id: `${dataSetId}/organisationUnits/gist`, // are we ok with this?
The above example is quite unique to the gist-API though. Typing this would be pretty hard - but I think it would be ok to return the type as unknown
in cases like this - and let the app pass in the type.
We could also support gist: boolean in ResourceQuery, which can be used to target the gist API.
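A sketch of how such a flag could influence path construction (resourcePath is an invented helper, not the engine's actual code):

```javascript
// Hypothetical: a `gist: true` flag on ResourceQuery could make the engine
// append `/gist` while building the path, instead of forcing it into
// `resource` or `id`.
const resourcePath = ({ resource, id, gist }) =>
    [resource, id, gist ? 'gist' : null].filter(Boolean).join('/')
```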
I've lots of thoughts about this, but I will end it here so it doesn't grow more out of scope, but it's definitely something to keep in mind while we're revisiting this.
TLDR - suggestions:

- Deprecate useDataQuery in favour of using react-query directly with engine.query as fetcher-function.
- Keep the QueryClientProvider in app-shell, to prevent more setup in apps. This comes at the cost of depending on react-query in app-runtime, and we would need to maintain the react-query version. But we probably need it there for header-bar and other internal hooks anyway? It does have the benefit of re-using the cache between app-shell and client - which could be a major benefit!

All in all, I think these changes would:

- Make the API more familiar: react-query is ubiquitous, and most react-devs should be familiar with it.
- Support dynamic params everywhere, without callbacks in the query-object.
- Improve typing: with isSuccess, data will be inferred to not be null, etc.
- Clarify imperative fetching: it currently goes through engine.query, which does not use queryClient.fetchQuery - it sends the request directly. So you cannot do imperative fetching while hitting the cache.

When using the <DataProvider>
I have to specify an apiVersion
prop like so:
<DataProvider baseUrl={REACT_APP_DHIS2_BASE_URL} apiVersion="">
Without apiVersion, several requests to the backend fail.
It would be nice if the default value of apiVersion were "", or if the headerbar would return an error if the prop is missing. I'd prefer to leave it out, if that wouldn't conflict with anything else.
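To illustrate why the default matters, a sketch of the path construction (helper names invented): naively stringifying an undefined apiVersion produces ".../api/undefined/...", while defaulting to an empty string and filtering empty segments yields an unversioned ".../api/...".

```javascript
// Invented helpers for illustration of the defaulting behaviour.
const joinPath = (...parts) =>
    parts.filter((part) => part != null && part !== '').join('/')

const apiUrl = (baseUrl, apiVersion = '') =>
    joinPath(baseUrl, 'api', String(apiVersion))
```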
I think the refactoring is going the exact right direction!
There are some things that we can still improve, I think:
Currently we're using a lot of state inside the engine and the engine links.
This means that all consumers of the engine (like the react-connecting functionality) have to accept that the state lives outside of their frame, while react, with its context and life-cycle hooks/methods, can deal with that very easily on its own.
So what I'd like to see is that the engine itself is purely functional and - when needed - accepts a config object rather than storing it itself, giving the connecting functionality the ability to store anything in its own way (like react's context + life cycle), while giving more implicit connecting functionality (maybe redux, or angular, etc.) the ability to do it their way as well.
When looking at how useDataQuery works, to me it seems that it basically behaves like subscribing to an observable:
// triggers a rerender on every emit
const { loading, error, data, refetch }
// subscription is happening here
= useDataQuery({ ... })
We're dealing with (potentially cached or memoized) requests over time,
as there are repeatable step-sequences ([load -> error/data] -> repeat
)
So - when taking into account that we're trying to cater to react apps - we could consider changing the data engine to be observer/subscription-like, and then leverage that.
The current approach uses promises, which reflects only one slice of time of
the whole lifetime, so I think it's not the right abstraction model here.
After the refactor I think the code became quite nested, and it was slightly harder for me to get a proper overview. So first of all, I'd like to reduce the depth of the tree and propose a directory structure that keeps the react implementation isolated:
src/
├── connectors/
│ ├── react/
│ │ ├── hooks
│ │ ├── context
│ │ └── components
│ └── react.js
├── engine/
│ ├── createContext.js
│ ├── query.js / executeQuery.js / subscribeToQuery.js
│ └── mutate.js
└── link/
├── RestApiLink/ # Now: src/engine/link/RestApiLink/helpers
│ └── ...
├── ErrorLink.js
├── CustomDataLink.js
├── RestApiLink.js
└── ....
All of the following code examples are more or less pseudo code. I'll use the hyperactiv lib style to demonstrate what I mean, but that's by no means what I'd suggest - that's a completely different discussion.
Note 1: The code examples are not meant as "use this instead" - then it'd be a PR. I just want to demonstrate an alternative approach, and I think code talks better than I do in this case.
Note 2: I'm sure I got a few things wrong here with variables and passing context to the link function, but you know what I mean.
This would be a functional approach to the link.
It's still composable and stateless, which - in my experience - increases the range of situations where we can apply it.
import { fetchData } from './RestAPILink/networkFetch'
const restFetch = (
{
baseUrl,
apiVersion,
},
options,
) => {
const apiPath = joinPath('api', String(apiVersion))
const apiEndpoint = joinPath(baseUrl, apiPath)
return fetchData(apiEndpoint, options)
}
const link = (context, type, query, { signal }) =>
restFetch(
queryToResourcePath(context, query),
queryToRequestOptions(type, query, signal)
)
And this is what the src/engine/subscribeToQuery.js
could look like:
/**
* @param {Object} args
* @param {Function} args.link
* @param {Object} args.query
* @param {Object} args.context
* @param {Function} args.onChange
* @param {Object} [args.variables]
*/
const subscribeToQuery = ({
link,
query,
context,
onChange,
variables,
}) => {
const refetchData = observe({
count: 0,
refetch: null,
abort: null
})
const state = observe({
loading: true,
error: null,
data: null,
}, { batch: true })
computed(() => {
const { count } = refetchData
refetchData.refetch = () => { refetchData.count = count + 1 }
})
computed(() => {
const { baseUrl, apiVersion, apiUrl, fetch } = context
// not used, just triggers this function to be called again
const { count } = refetchData
const controller = new AbortController()
const abort = () => controller.abort()
Promise.all(
queries.map(
q => {
const resolvedQuery = resolveDynamicQuery(q, variables)
return link(context, resolvedQuery, controller.signal)
}
)
)
.then(data => {
if (!controller.signal.aborted) {
state.loading = false
state.data = reduceResponses(data)
}
})
.catch(error => {
if (!controller.signal.aborted) {
state.loading = false
state.error = error
}
})
refetchData.abort = abort
})
computed(() => {
const { loading, error, data } = state
const { refetch, abort } = refetchData
onChange({
loading,
error,
data,
refetch,
abort,
})
})
return () => {
dispose(refetchData)
dispose(state)
}
}
So e.g. the react context could look something like this:
import { createContext } from '../engine/index.js'
const DataProvider = props => {
const [ context, setContext ] = useState(() => createContext(props.options))
useEffect(
() => { setContext(createContext(props.options)) },
[ props.options ]
)
return (
<DataContext.Provider value={context}>
{props.children}
</DataContext.Provider>
)
}
This could be an approach of the useDataQuery
hook:
const useDataQuery = (query, variables) => {
const [ queryState, setQueryState ] = useState({ loading: true })
const context = useContext(DataContext)
useEffect(
// subscribeToQuery returns the unsubscribe function
() => subscribeToQuery({
link: context.link,
query,
context,
onChange: setQueryState,
variables,
}),
[ query ],
)
return queryState
}
This allows us to extract all logic from the react parts; we simply have to set up the engine per query, which is relatively easy because both the hooks and the engine use the Observer pattern.
The following devDependencies were updated:

- @babel/cli was updated from 7.4.4 to 7.5.0
- @babel/core was updated from 7.4.5 to 7.5.0
- @babel/plugin-proposal-class-properties was updated from 7.4.4 to 7.5.0
- @babel/plugin-proposal-object-rest-spread was updated from 7.4.4 to 7.5.0
- @babel/plugin-transform-runtime was updated from 7.4.4 to 7.5.0
- @babel/preset-env was updated from 7.4.5 to 7.5.0
- @babel/runtime was updated from 7.4.5 to 7.5.0

This version is covered by your current version range and after updating it in your project the build failed.
This monorepo update includes releases of one or more dependencies which all belong to the babel7 group definition.
babel7 is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.
There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
When executing a mutation, the server returns a JSON object in the response body which includes the statusCode
, statusMessage
, and response
which has a uid
property (usually?). This will be unexpected for useDataMutation
consumers, since the status information should be dealt with (and stripped) by the engine.
This would make debugging much less annoying.
The devDependency rollup was updated from 1.16.4 to 1.16.5. This version is covered by your current version range and after updating it in your project the build failed.
rollup is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.
2019-07-04
The new version differs by 4 commits.
- c4dbb51 1.16.5
- ee4f0fe Update changelog
- b1afb6e Do not skip onwarn handler when --silent is used (#2981)
- ef7486d Make tests run on Node 12, fix watcher cleanup issue (#2982)
See the full diff
We handle 401, 403 and 409 errors here: https://github.com/dhis2/app-runtime/blob/master/services/data/src/links/RestAPILink/fetchData.ts#L13. We should add 404's as well.
Currently yarn start only watches for changes in the CRA example directory.
https://swr.now.sh/ does a lot of what we currently support, with many features we'd like to have but haven't implemented yet including:
- mutating cached data (the mutate function)

I think this would simplify the data service a bit, shouldn't require any API changes so it should be opaque to the consumer, and looks fairly robust. useSWR could just call engine.query as its fetcher, so it also fits nicely into our layering model. I haven't tested the library yet, though, and we'd need to evaluate the bundle and complexity costs of adding app-runtime's first external dependency.
Minimal reproduction: https://codesandbox.io/s/suspicious-glade-z275n
Slack discussion: https://dhis2.slack.com/archives/CLJSVA1RV/p1604569475159100
The above sandbox simulates what happens when an app needs props and fetched data to be in sync. Because useEffect triggers the refetch after the Visualization component receives the updated props, the loading state will not have been triggered yet. So for a short moment the component will have new props and old data.
I've solved this in the app for now, by marking data as stale manually when certain props update. This can get a bit complex, so it'd be good to discuss the recommended approach.
Currently we duplicate a lot of the build tooling and configuration
The link to Query type definition at the bottom of app-runtime/blob/master/docs/types/Query.md is broken and displays a 404 page.
These are the remaining pieces to be incorporated into v2 of @dhis2/app-runtime (currently living on the next branch):

- ... (params) (#115)

Documentation and examples, not critical for cutting 2.0:

- ... (react-redux and redux-thunk)

Deferred to a 2.x minor release (or 3.0):

- For services and runtime, use cli-app-scripts instead of rolling our own config
- ... app-runtime package
Currently the mutation functionality expects the query to be one exact mutation, unlike the query structure when trying to fetch data.
Although there's no scenario yet, I'm very sure it'll come up in the future anyways (especially once we start working on the maintenance app)
Currently multiple mutations have to be done individually and in parallel, which introduces a bit of imperative code into applications when they have to do this (Promise.all([mutationOne(), mutationTwo(), ...])).
Graphql supports specifying multiple mutation operations in one query.
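Until then, apps can hide the Promise.all behind a small helper; a hedged sketch mirroring the named-query map (mutateAll is invented, not part of the API):

```javascript
// Hypothetical multi-mutation helper: run all mutations in parallel and
// key the results by name, hiding the Promise.all from application code.
const mutateAll = (engine, mutationMap, options) =>
    Promise.all(
        Object.entries(mutationMap).map(([name, mutation]) =>
            engine.mutate(mutation, options).then((data) => [name, data])
        )
    ).then(Object.fromEntries)
```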
Hey @amcgee,
I was thinking about redux integration of the data engine. While discussing the new maintenance app with @Birkbjo, he said it might be beneficial to store the user's privileges and all the metadata's required privileges in the store so it doesn't need to be handled in many different components. So I was wondering what would need to be done in order to get the data engine and redux running together. I've written some naive pseudo code, that illustrates what I've come up with:
I think the easiest way to use it is by adding some middleware:
const dataEngineMiddleware = engine => {
// storing the args that will be passed to the param function
// in the query
const paramArgs = {}
// middleware function
    return store => next => action => {
        const { meta, payload } = action

        if (meta && meta.namespace) {
            const { namespace } = meta
            const { query, args } = payload
            const prevParamArgs = paramArgs[namespace] || {}

            // persist args
            paramArgs[namespace] = { ...prevParamArgs, ...(args || {}) }

            // apply args to query.params, if it is a function
            const realQuery = {
                ...query,
                params: typeof query.params === 'function'
                    ? query.params(paramArgs[namespace])
                    : query.params,
            }

            // set loading state
            store.dispatch({ type: `${namespace}_QUERY_LOADING` })

            // load data and set appropriate state on response/error
            engine.query(realQuery)
                .then(data => {
                    store.dispatch({ type: `${namespace}_QUERY_DONE`, payload: { data } })
                })
                .catch(error => {
                    store.dispatch({ type: `${namespace}_QUERY_ERROR`, payload: { error } })
                })
        }

        // always pass the action along the chain
        return next(action)
    }
}
We just have to set up the store inside the App component, not outside:
// configureStore.js
const configureStore = (engine, initialState) => {
const composeEnhancer = window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__
? window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__
: compose
const store = createStore(
rootReducer,
initialState,
composeEnhancer(
applyMiddleware(
thunk,
dataEngineMiddleware(engine),
),
)
)
return store
}
// app.js
const App = () => {
const engine = useDataEngine()
const [ store ] = useState(() => configureStore(engine))
return (
<Provider store={store}>
{/* ... */}
</Provider>
)
}
Then we need a function that creates a reducer and an action creator:
// setting up the reducer
const dataQueryReducer = ({ namespace, query }) => {
const defaultState = { loading: false, error: null, data: null }
const reducer = (state = defaultState, action) => {
if (action.type === `${namespace}_QUERY_LOADING`) {
return { ...state, loading: true, error: null }
}
if (action.type === `${namespace}_QUERY_ERROR`) {
return { ...state, loading: false, error: action.payload.error }
}
if (action.type === `${namespace}_QUERY_DONE`) {
return { ...state, loading: false, data: action.payload.data }
}
if (action.type === `${namespace}_QUERY_ABORT`) {
return { ...state, loading: false }
}
return state
}
const doFetch = args => ({
type: 'DATA_ENGINE_DO_FETCH',
payload: { query, args },
meta: { namespace }
})
return { reducer, doFetch }
}
Then it should be relatively simple to create reducers:
// creating a data query reducer
import { dataQueryReducer } from '@dhis2/app-runtime'
const { reducer, doFetch } = dataQueryReducer({
namespace: 'ORG_UNITS',
query: {
resource: 'organisationUnits',
params: ({ page }) => ({
page,
pageCount: 20,
id: 'FOOOOOOOO',
})
}
})
export {
reducer as organisationUnitsReducer,
doFetch as fetchOrganisationUnits,
}
// creating the root reducer
import { organisationUnitsReducer } from './organisationUnits'
const rootReducer = combineReducers({
organisationUnits: organisationUnitsReducer,
// ... more reducers
})
And pretty simple to use that in components:
import { useDispatch, useSelector } from 'react-redux'
import { fetchOrganisationUnits } from '../../redux/organisationUnits'
const OrgUnits = () => {
const dispatch = useDispatch()
const loading = useSelector(state => state.organisationUnits.loading)
const error = useSelector(state => state.organisationUnits.error)
const units = useSelector(state => state.organisationUnits.data)
const fetchThem = () => dispatch(fetchOrganisationUnits({ page: 2 }))
return (
<>
<button onClick={fetchThem}>Update units</button>
{loading && <span>Loading units...</span>}
{error && <span>Error: {error.message}</span>}
{!loading && !error && units && units.map(unit => <p key={unit.id}>{unit.displayName}</p>)}
</>
)
}
Current usage of DataMutation requires the child function to take a single argument: an array with two elements (mutate and state). Passing these as two separate arguments would be cleaner and in line with Apollo's Mutation component, but would require a breaking change.
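To make the difference concrete, here is a sketch of the two child-function signatures (the component itself is omitted; these are plain functions for illustration, and the return values are placeholders):

```javascript
// Current signature: one argument, an array of [mutate, state]
const currentChild = ([mutate, { loading, error, data }]) =>
    loading ? 'Saving...' : 'Done'

// Proposed (breaking) Apollo-style signature: two separate arguments
const proposedChild = (mutate, { loading, error, data }) =>
    loading ? 'Saving...' : 'Done'
```

The array form mirrors the tuple returned by useDataMutation, which is presumably why it was chosen; the two-argument form reads more naturally in render-prop JSX.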
Dependabot couldn't authenticate with registry.npmjs.org.
You can provide authentication details in your Dependabot dashboard by clicking into the account menu (in the top right) and selecting 'Config variables'.
I know @amcgee has talked about this before, but I was wondering about it again and just wanted to verify that the approach I've taken fits the intentions of the app platform.
In the scheduler-app we're using final-form. For submitting a form final-form accepts a callback that returns a promise. The promise should resolve with an error on validation errors, and reject with an error on any other kind of error.
In implementing this I noticed that the promise returned by the mutation function (from the useDataMutation hook) does not reject with the errors if there are any (if I'm remembering correctly). So I've created a hook for it:
const mutation = {
resource: 'jobConfigurations',
type: 'update',
partial: true,
id: ({ id }) => id,
data: ({ job }) => job,
}
const useUpdateJob = ({ id }) => {
const engine = useDataEngine()
// `history` is assumed to come from e.g. react-router's useHistory hook
const history = useHistory()
const updateJob = job =>
engine
.mutate(mutation, { variables: { job, id } })
.then(() => {
history.push('/')
})
.catch(error => {
const isValidationError = error.type === 'access'
// Potential validation error, return it in a format final-form can handle
if (isValidationError) {
return formatError(error)
}
// Throw any unexpected errors
throw error
})
return [updateJob]
}
The above works. But it would be nice if the returned promise from useDataMutation would reject with errors and only resolve on success, so that I could implement the above logic without creating a custom hook. Would that be possible, or are there limitations that prevent us from implementing that?
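In the meantime, the conversion can be done generically in userland. A hypothetical sketch, assuming a mutate function that resolves with an object carrying an error property on failure (that resolve-with-error shape is an assumption for demonstration, not the documented app-runtime API):

```javascript
// Hypothetical wrapper: turn a mutate function that resolves with
// { error } on failure into one that rejects with that error instead.
const rejectOnError = mutate => (...args) =>
    mutate(...args).then(result => {
        if (result && result.error) {
            // Surface the error through the rejection path
            throw result.error
        }
        return result
    })
```

With a wrapper like this, final-form's submit callback could consume the promise directly, resolving with formatted validation errors and rejecting on anything unexpected.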
Let me know if I'm overlooking anything obvious, it's been a while since we've talked about this, so I might be misremembering parts.
The automated release from the 2.x branch failed. 🚨 I recommend you give this issue a high priority, so other packages depending on you can benefit from your bug fixes and new features again.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can fix this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the 2.x branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here are some links that can help you:
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
The release 2.12.0 on branch 2.x cannot be published as it is out of range. Based on the releases published on other branches, only versions within the range >=2.11.1 <2.12.0-beta.1 can be published from branch 2.x.
The following commit is responsible for the invalid release:
This commit should be moved to a valid branch with git merge or git cherry-pick and removed from branch 2.x with git revert or git reset. A valid branch could be 2.x or master.
See the workflow configuration documentation for more details.
Good luck with your project ✨
Your semantic-release bot 📦🚀