Comments (14)
I have made a number of breaking changes to the cloud library recently without providing guidance for how to keep it up to date. Even without modifications, tracking those changes can be a challenge for a persistent server. Cloudlib has so few users (and almost no feedback) that it hasn't seemed worthwhile to describe the hoops one has to jump through.
That said, I can provide assistance for recent changes, if you can tell me the commit or the date of the last version of cloudlib you were using before the merge (that is, the last "official" cloudlib commit that you had previously merged or based your work on).
from cloud-server.
I'm not exactly sure, but evidence suggests it was circa commit 7c6f9b6.
I'm somewhat doubtful all the modules (HTTP, WWW, LPC) work as-is, but the only major bug was with `rsrcd` expecting `float`s instead of `int`s.
There was also an issue with `runtime_error`, but I customized `errord` a bit to show a slightly more Java-like stack trace when errors are caught and rethrown, and to allow printing stack traces even when the error is ultimately caught (so I can log errors arising from user input), so that was probably because of changes on my end.
I think `rsrcd` could probably be patched just by removing `"tick usage"` and changing `"ticks"` to store a `float` as its value, but I'm not quite sure how to address the fact that it contains bad data in the first place. Since I have my handy, dandy `query_object()` method, I could just brute-force readjusting object counts; I already do similar things when an object has too many clones to track easily, or when searching for objects that aren't aligned with their source code (outdated/missing code).
Also, I'm wondering how tick management works now that there's no separate `"ticks"` and `"tick usage"`, whereas previously the former limited each call and the latter limited long-term use. I haven't had a chance to look through it in more detail.

I'm mostly only using `rsrcd` currently for `call_limited()`, though.
from cloud-server.
`"tick usage"` was folded into `"ticks"`. This is not important data that needs to be preserved, so I'd suggest throwing it away and recreating it to match the current `rsrcd`.
from cloud-server.
Alright. In the off chance anyone else has to do this, I temporarily added an `int patched;` and put `if (!patched) patch();` in `rsrcd.c`.

I also added the following function to `rsrcd.c`:
```
void patch()
{
    object *objects;
    int i;

    patched = 1;    /* ensure the patch runs only once */
    resources["tick usage"] = nil;
    resources["ticks"] = ({ -1, 10, 3600 });
    objects = map_values(owners);
    for (i = sizeof(objects); --i >= 0; ) {
        objects[i]->patch();
    }
}
```
And to `rsrc.c`:
```
void patch()
{
    /* drop the old "tick usage" entry and reset "ticks" */
    resources["tick usage"] = nil;
    resources["ticks"] = ({ 0.0, -1, 0 });
}
```
This seems to have resolved the issue.
from cloud-server.
Now everything seems to have infinite ticks per command, which is pretty undesirable.
Edit: Checking a new installation of Cloud Server, this doesn't seem to be a normal result.
Edit 2: Yeah, I just had to `rsrc ticks X` where `X` is the number of ticks.
from cloud-server.
Is the usage number stored in `"ticks"` now just for reporting purposes?
from cloud-server.
I decided to keep the customizations I made to `rsrc` for now. In case you think they're good ideas, I'll share the code for one of them, which is to swap from a for loop to using `pow()`.
```
private void decay_rsrc(mixed *rsrc, int *grsrc, int when)
{
    float usage, decay;
    int period, t, delta;

    usage = rsrc[RSRC_USAGE];
    decay = (float) (100 - grsrc[GRSRC_DECAY]) / 100.0;
    period = grsrc[GRSRC_PERIOD];
    t = rsrc[RSRC_DECAYTIME];
    if ((delta = (when - t) / period) > 0) {
        /* apply all elapsed decay periods in a single pow() */
        usage *= pow(decay, (float) delta);
        if (usage < 0.5) {
            t = when;
        } else {
            t += period * delta;
        }
        rsrc[RSRC_DECAYTIME] = t;
        rsrc[RSRC_USAGE] = floor(usage + 0.5);
    }
}
```
The other, somewhat evident from the above code, was to change from `time()` to `status()[ST_UPTIME]`, removing the contents of `reboot()` and `prepare_reboot()` along with `downtime`. For the sake of defensive programming, and because I personally use vim, which confusingly thinks `time` refers to `time()`, I've changed all instances of the variable/argument `time` to `when`.
Technically I use the function `now()` instead of the more verbose `status()[ST_UPTIME]`, but that's just a shorthand I added in `/kernel/lib/auto`, because I use it for game logic as well. (`time()` is basically only used when referring to real-world time, e.g. when something was written; `ST_UPTIME` is preferred for things like game timestamps, to keep things in sync with callouts.)
from cloud-server.
> Is the usage number stored in `"ticks"` now just for reporting purposes?
It's been like that for some years. Comparing the cumulative ticks with a limit in every task caused a lot of rollbacks in Hydra, even when the update of cumulative ticks happens from a separate callout.
from cloud-server.
Ah, I see. You probably would've needed some intermediate flag instead of directly comparing ticks, e.g. `danger`, which is set to `TRUE` whenever the remaining ticks are less than the amount normally guaranteed for each function, and otherwise `FALSE`.
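That predicate is tiny; a sketch in C for illustration (the names and the -1-means-unlimited convention are my assumptions, not kernel library API):

```c
/* Nonzero when the remaining cumulative ticks fall below the amount
 * normally guaranteed to each task; a limit of -1 means unlimited. */
int danger(int limit, int used, int guaranteed)
{
    if (limit < 0) {
        return 0;    /* unlimited: never in danger */
    }
    return limit - used < guaranteed;
}
```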
That'd still cause a lot of rollbacks within the danger period, though if a user is actually in the danger period, that means they've nearly used up all their ticks anyway. If Hydra were super intelligent, or there were a way to hint to it to do this, that'd be the point where you'd probably want to limit how many threads get assigned to them in the first place. I'm not sure how you'd manage that, though, since users are basically an abstract concept as far as DGD itself is concerned.
Or you could emulate the approach I just implemented for non-admin users. As an experiment, I'm using ticks rather than # of commands (along with some "social" tick taxes to make spamming other users more expensive) as my spam limit for end users. Once they run out, their input is delayed until they have more. In other words, once a user enters the danger zone, you could delay any future commands and call outs from them until they've regenerated enough ticks to come back out (or even kick off users with a message like "Out of ticks, come back later.")
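The regeneration scheme described above is essentially a token bucket keyed on ticks instead of commands. A hypothetical sketch in C (all names and numbers are mine; the real bookkeeping lives on the user object):

```c
/* Per-user tick budget: ticks regenerate at a fixed rate per second
 * of uptime, and a command is deferred while the budget is short. */
struct budget {
    int ticks;   /* remaining ticks */
    int max;     /* cap on stored ticks */
    int rate;    /* ticks regained per second */
    int last;    /* uptime at the last update */
};

/* Bring the budget up to date for the current uptime. */
void regen(struct budget *b, int now)
{
    b->ticks += (now - b->last) * b->rate;
    if (b->ticks > b->max) {
        b->ticks = b->max;
    }
    b->last = now;
}

/* Nonzero if the command may run now; otherwise delay it (or kick
 * the user with an "out of ticks" message). */
int may_run(struct budget *b, int now, int cost)
{
    regen(b, now);
    if (b->ticks < cost) {
        return 0;
    }
    b->ticks -= cost;
    return 1;
}
```

The "social tick tax" then just becomes a larger `cost` for commands that target other users.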
I'm not actually using the feature at the admin user level though. End users all get calculated under System, haha. Their own individual tick allotments are on the user object itself. I might see if I can do some wizardry to move them under a different user (e.g. Users or something) just for reporting purposes, but it's fine for now.
from cloud-server.
The way it currently works is that data from `/kernel/sys/rsrcd` is read to determine how many ticks are available at the start of `call_limited`, and the updated tick count is stored via a callout in `/kernel/obj/rsrc`.
If you wanted to know at the beginning of `call_limited` whether the cumulative limit was exceeded, you'd have to get that data from `/kernel/obj/rsrc` in one way or another. Then a cumulative-ticks update from another task, occurring during the execution of `call_limited`, would cause a rollback of the `call_limited` task.
from cloud-server.
Your `danger` flag idea sort of fits in with this. `/kernel/obj/rsrc` could forward this data to `/kernel/sys/rsrcd` when the cumulative tick count passes a limit, the way it already does with the timestamp.
from cloud-server.
You'd probably want to limit or otherwise manage call outs too, or else someone malicious could fill the server with as many call outs as it can take, each wasting as many ticks as possible, and they'd all presumably run before it gets to the first tick increment. It'd be fine as a general, soft limit though, and since there are finite call outs, even that kind of malicious behavior would eventually have a limit anyway.
Kernel coding on Hydra sounds pretty tough.
from cloud-server.
The original kernel library used to do that, but I no longer aspire to that level of control. It's now about eventual recovery from accidents, rather than "you don't get to spend those ticks that you don't have a budget for." Preventing malicious attacks from guest coders is not possible in general, so I don't even try.
> Kernel coding on Hydra sounds pretty tough.
It can be thought of as an extension of Hydra coding, which is very tough indeed.
from cloud-server.
Over the weekend I brought back a tick usage limit, which you can see with `rsrc ticks` and change with `quota ticks usage float`. This time, there should be no effect on Hydra's efficiency.
from cloud-server.