The website template is from https://html5up.net/.
The news preparation script is from https://github.com/mshahrad/mshahrad.github.io
A lightweight framework that enables serverless users to reduce their bills by harvesting non-serverless compute resources such as their VMs, on-premise servers, or personal computers.
License: Apache License 2.0
Traceback (most recent call last):
File "/home/ghazal/.local/lib/python3.10/site-packages/docker/api/client.py", line 268, in _raise_for_status
response.raise_for_status()
File "/home/ghazal/.local/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.41/exec/31e0cc791ad6c49458f8186917ffc52c0f21ac104a1e1d3053a431b11742ee28/start
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ghazal/de-serverlessization/vm-agent/execution-agent/vmModule.py", line 263, in <module>
streaming_pull_future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/home/ghazal/de-serverlessization/vm-agent/execution-agent/vmModule.py", line 259, in <module>
streaming_pull_future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 446, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/home/ghazal/.local/lib/python3.10/site-packages/google/cloud/pubsub_v1/subscriber/_protocol/streaming_pull_manager.py", line 126, in _wrap_callback_errors
callback(message)
File "/home/ghazal/de-serverlessization/vm-agent/execution-agent/vmModule.py", line 205, in callback
cont.exec_run("python3 /app/main.py '"+ str(jsonfile).replace('\'','"') + "' " + reqID,detach=False )
File "/home/ghazal/.local/lib/python3.10/site-packages/docker/models/containers.py", line 198, in exec_run
exec_output = self.client.api.exec_start(
File "/home/ghazal/.local/lib/python3.10/site-packages/docker/utils/decorators.py", line 19, in wrapped
return f(self, resource_id, *args, **kwargs)
File "/home/ghazal/.local/lib/python3.10/site-packages/docker/api/exec_api.py", line 167, in exec_start
return self._read_from_socket(res, stream, tty=tty, demux=demux)
File "/home/ghazal/.local/lib/python3.10/site-packages/docker/api/client.py", line 409, in _read_from_socket
socket = self._get_raw_response_socket(response)
File "/home/ghazal/.local/lib/python3.10/site-packages/docker/api/client.py", line 318, in _get_raw_response_socket
self._raise_for_status(response)
File "/home/ghazal/.local/lib/python3.10/site-packages/docker/api/client.py", line 270, in _raise_for_status
raise create_api_error_from_http_exception(e) from e
File "/home/ghazal/.local/lib/python3.10/site-packages/docker/errors.py", line 39, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation) from e
docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.41/exec/31e0cc791ad6c49458f8186917ffc52c0f21ac104a1e1d3053a431b11742ee28/start: Internal Server Error ("Container 8957a6657d56082b51d2c51968472492824111eb48d644ddd00ff157e9137d19 is not running: Exited (137) Less than a second ago")
When testing with the second VM, @mfouadga2 noticed that it's not straightforward to add new pub/sub for VMs. We need a methodology for this.
Based on the simulation results, let's go with the Exponential Moving Average policy. It can be found here: https://github.com/ubc-cirrus-lab/de-serverlessization-simuls/blob/main/vm-resource-prediction/Predictor.py
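For reference, a minimal sketch of how such an EMA policy works (illustrative only; the actual implementation is in the Predictor.py linked above, and the class and parameter names here are made up):

```python
class EMAPredictor:
    """Exponential Moving Average over observed resource usage (sketch)."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha      # smoothing factor, 0 < alpha <= 1
        self.estimate = None    # current prediction

    def update(self, observation: float) -> float:
        # The first observation seeds the estimate; afterwards, blend
        # each new sample with the running average.
        if self.estimate is None:
            self.estimate = observation
        else:
            self.estimate = self.alpha * observation + (1 - self.alpha) * self.estimate
        return self.estimate
```

A larger alpha reacts faster to load changes; a smaller one smooths out spikes.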
@mfouadga2 should we call it VM agent instead of CPU agent? (https://github.com/ubc-cirrus-lab/de-serverlessization/tree/main/cpu-agent). After all, the agent should be able to profile and make predictions for CPU as well as memory.
The merging function is referenced twice in the code, under two different names: once as Text2SpeechCensoringWorkflow_MergedFunction and the other time as Text2SpeechCensoringWorkflow_MergingPoint.
Either by the Header function or ...
Maybe start with a histogram-based approach over a moving window of time.
From the solver's viewpoint, available resources on VMs include anything that is not used by the host's original workload. In this design, 1) the resources used by offloaded serverless functions should not be counted by the monitoring/prediction agent, and 2) offloaded serverless functions should be capped so they do not use more resources than what is predicted to be available. I think our current implementation violates this model. Creating this issue for us to think about this more carefully and fix it.
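To make the accounting concrete, here is a tiny sketch of the intended model (function and parameter names are assumptions; in the agent, the measured numbers would come from the monitor and from the offloaded containers' own stats):

```python
def available_for_offload(capacity: float,
                          measured_usage: float,
                          offloaded_usage: float) -> float:
    """Resources the solver may hand to offloaded serverless functions.

    measured_usage is the VM-wide measurement, which wrongly includes
    the offloaded functions themselves; subtract their usage to recover
    the host's *original* workload, and only that counts against capacity.
    """
    original_host_usage = measured_usage - offloaded_usage
    return max(0.0, capacity - original_host_usage)
```

The cap in point 2) would then limit the offloaded containers to at most this returned amount.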
Exploring whether the docker SDK would be faster/more efficient than the syscalls.
Just noticed that the VM agent makefile contains pip install commands. Currently, we have a setup script that takes care of installing all dependencies listed in the requirements file. I suggest consolidating all dependency installation there for easier management.
Right now, at most one instance of each function is allowed to run. We need to remove this limit.
This will probably fix #41 too.
@mohannashahrad has volunteered to port our benchmarks to Google Cloud Workflows. She should have access to the necessary cloud resources.
@mohannashahrad please use this issue to provide updates on this front in the coming weeks. This is not on our critical path, so you can take your time. If you run into problems with the benchmarks, you can let me or @GhazalSdn know.
Thanks for your help.
Mentioned in #13
And include them in a sub-directory under the benchmark
I'll take care of it in the next few days
This task (removing old records based on their timestamps) should be done asynchronously.
Instead of a hardcoded fan-out of 10.
It seems that the process ID used in dockerprocstatparser is hardcoded. Please look into it and automate it, as reading the wrong docker utilization data messes up the entire VM prediction/execution flow.
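For illustration, a sketch of parsing utime/stime from /proc/&lt;pid&gt;/stat with the PID resolved at runtime via docker inspect instead of being hardcoded (helper names are assumptions, and the docker call needs a running daemon):

```python
def parse_stat(stat_text: str) -> tuple[int, int]:
    """Extract (utime, stime) jiffies from a /proc/<pid>/stat line.

    The comm field (field 2) can itself contain spaces and parentheses,
    so split only after the *last* ')'.
    """
    rest = stat_text[stat_text.rindex(")") + 2:].split()
    # rest[0] is field 3 (state); utime and stime are fields 14 and 15.
    return int(rest[11]), int(rest[12])

def container_cpu_ticks(container_id: str) -> tuple[int, int]:
    """Resolve the container's host PID at runtime (docker SDK assumed)."""
    import docker  # requires docker-py and a running daemon
    pid = docker.from_env().api.inspect_container(container_id)["State"]["Pid"]
    with open(f"/proc/{pid}/stat") as f:
        return parse_stat(f.read())
```

Looking the PID up per sample (or re-resolving it when the container restarts) avoids ever reading another process's stat file.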
A buffer is needed to keep a record of recent execution times on the VM side before they are submitted. The submission will be taken care of by #11.
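Something like the following could work as the buffer (a sketch; the class name, record shape, and maxlen are assumptions):

```python
import threading
from collections import deque

class ExecTimeBuffer:
    """Thread-safe buffer of recent (timestamp, duration) execution
    records on the VM side; drain() hands everything off for the
    submission step in one batch."""

    def __init__(self, maxlen: int = 1000):
        self._records = deque(maxlen=maxlen)  # oldest entries drop off
        self._lock = threading.Lock()

    def add(self, timestamp: float, duration_s: float):
        with self._lock:
            self._records.append((timestamp, duration_s))

    def drain(self) -> list[tuple[float, float]]:
        # Atomically take and clear the buffered records.
        with self._lock:
            out = list(self._records)
            self._records.clear()
            return out
```

The bounded deque keeps memory flat even if submission stalls, at the cost of dropping the oldest records.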
We need to add a README.md file under benchmarks. This README should include a table with each benchmark application in a row and should have the following columns:
At some point after executing the tests, I got this error from the vmModule, but it continued working afterwards.
Exception in thread Thread-2 (threaded_function):
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
self.run()
File "/usr/lib/python3.10/threading.py", line 946, in run
self._target(*self._args, **self._kwargs)
File "/home/ghazal/de-serverlessization/vm-agent/execution-agent/vmModule.py", line 81, in threaded_function
for key in lastexectimestamps:
RuntimeError: dictionary changed size during iteration
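The error comes from mutating lastexectimestamps while another thread iterates over it. A sketch of one possible fix: iterate over a snapshot of the keys and guard cross-thread access with a lock (the function signature here is an assumption, not the repo's actual code):

```python
import threading

lastexectimestamps: dict = {}   # written by the pub/sub callback thread
_lock = threading.Lock()        # guards all cross-thread access

def prune_exec_timestamps(max_age_s: float, now: float):
    # list(...) snapshots the keys, so concurrent inserts/deletes can't
    # change the dict's size mid-iteration; the lock makes the whole
    # prune atomic with respect to the writer.
    with _lock:
        for key in list(lastexectimestamps):
            if now - lastexectimestamps[key] > max_age_s:
                del lastexectimestamps[key]
```

The callback side would take the same lock when inserting timestamps.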
We decided to make the merging nodes explicit. @GhazalSdn has already started working on this. Creating an issue to keep track of it until completion.
Seems like system logs are committed to the repo. I saw a few examples such as https://github.com/ubc-cirrus-lab/de-serverlessization/blob/main/log_parser/get_workflow_logs/logs/logParser.log and https://github.com/ubc-cirrus-lab/de-serverlessization/blob/main/scheduler/logs/scheduler.log
I suggest adding *.log to our .gitignore file.
So that we don't go down the long path (json -> image -> create -> run) every single time.
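The caching part could look roughly like this (a generic sketch; the factory would wrap the actual json -> image -> create -> run path via the docker SDK, and all names here are assumptions):

```python
class ContainerCache:
    """Keep warm containers keyed by function name so repeated requests
    skip the json -> image -> create -> run path."""

    def __init__(self, factory):
        self._factory = factory   # called once per function name
        self._containers = {}

    def get(self, func_name: str):
        # Only the first request for a function pays the full cost;
        # later requests reuse the warm container (e.g. via exec).
        if func_name not in self._containers:
            self._containers[func_name] = self._factory(func_name)
        return self._containers[func_name]
```

A real version would also evict containers that have exited and re-create them on the next request.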
Fields such as publishTime are not needed for pub/sub communication. Please remove any unnecessary fields to ensure low overhead for the routing epilogue.
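One simple way is an allowlist filter applied before publishing (a sketch; the field names kept here are assumptions about our message schema):

```python
def strip_message(msg: dict, needed=("data", "reqID")) -> dict:
    """Drop fields like publishTime that the routing epilogue doesn't
    need; only the allowlisted keys survive."""
    return {k: v for k, v in msg.items() if k in needed}
```

An allowlist is safer than a denylist here: any new metadata field added later is excluded by default instead of silently inflating every message.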
Adding a tests directory to contain all the test scripts and data.