
ryfeus / lambda-packs


Precompiled packages for AWS Lambda

License: MIT License

Python 85.80% CSS 0.02% TeX 0.02% JavaScript 0.01% C 1.53% C++ 11.96% Fortran 0.02% HTML 0.04% Makefile 0.02% MATLAB 0.01% XSLT 0.05% CMake 0.03% Roff 0.01% Shell 0.01% Smarty 0.01% Cuda 0.01% Cython 0.49% SWIG 0.01% Java 0.01% Dockerfile 0.01%
aws-lambda phantomjs serverless aws tensorflow keras numpy pandas tesseract sklearn

lambda-packs's Introduction

Hi! 👋

My name is Rustem and I'm a machine learning engineer at Instrumental and an AWS ML Hero. Feel free to ask me questions about AWS and ML.

I maintain the following projects:

  • lambda-packs - Packaged environments for AWS Lambda that make it possible to use various Python libraries
  • gcf-packs - Packaged environments for Google Cloud Functions that make it possible to use various Python libraries
  • stepfunctions2processing - Single deployment configurations for AWS Step Functions with AWS Batch, AWS Fargate, Amazon SageMaker and more

Check out my articles:

...and talks:

You can find me on LinkedIn and my website ryfeus.io.

lambda-packs's People

Contributors

beomi, con-mi, dlperf, fredliporace, martinpeck, rvaneijk, ryfeus, trellixvulnteam


lambda-packs's Issues

Spacy36

Hi @ryfeus
Any chance you can describe the process of how you create these packages, so I can build one for Python 3.6?

Performance issues in the program

Hello, I found a performance issue in the definition of synthetic_dataset_helper:
tf.data.Dataset.range(num_batches).map is called without num_parallel_calls.
I think it would increase the efficiency of your program if you added it.

The same issue also exists in dataset = tf.data.TextLineDataset(path).skip(1).map.

Here is the TensorFlow documentation to support this.

Looking forward to your reply. By the way, I would be glad to create a PR to fix it if you are too busy.
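
A minimal sketch of the suggested change, assuming TF 1.12 or later where num_parallel_calls and AUTOTUNE are available; the map functions and path below are placeholders rather than the repo's actual code:

import tensorflow as tf

# Hypothetical stand-in for whatever per-line parsing the original map does.
def parse_line(line):
    return line

def make_datasets(path, num_batches):
    # Parallelise the map calls flagged above; AUTOTUNE lets tf.data pick
    # the degree of parallelism at runtime.
    batches = tf.data.Dataset.range(num_batches).map(
        lambda i: i * 2,  # placeholder transform
        num_parallel_calls=tf.data.experimental.AUTOTUNE)

    lines = tf.data.TextLineDataset(path).skip(1).map(
        parse_line, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    return batches, lines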

Upgrade Scipy to v1.2.0

Thanks for sharing this, I've been using it for over a year.

Would it be possible to upgrade the sklearn/scipy/numpy bundle to use the latest SciPy version, v1.2.0?

Unable to execute

I have not made any changes, but when testing in Lambda I get the following error message:

Unable to import module 'service': No module named keras.models

Please help me resolve it.

Help wanted

I am using the Satellite imagery processing pack. I am just wondering how to invoke the function, and how to specify which S3 bucket the output image should be saved to?
Best regards,
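
Not the pack's documented interface, but one hedged way to do it: pass the target bucket and key in the invocation event and upload the result with boto3 (the event fields, bucket and output path below are made-up names):

import boto3

s3 = boto3.client('s3')

def handler(event, context):
    # ... run the satellite imagery processing and write the result to /tmp ...
    output_path = '/tmp/output.tif'                            # hypothetical local result
    bucket = event.get('output_bucket', 'my-results-bucket')   # hypothetical event fields
    key = event.get('output_key', 'results/output.tif')

    # Upload the rendered image to the bucket chosen via the invocation event.
    s3.upload_file(output_path, bucket, key)
    return {'bucket': bucket, 'key': key}

The function can then be invoked from the console, the AWS CLI, or serverless invoke, with those fields supplied in the event JSON.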

Socket, urllib timing out, FakeUserAgentError - on unedited app image

The app is no longer working - I even created a new function directly from the repository and it came up with:

START RequestId: 8129cacb-886e-4ee7-b95a-2a577d13b9a8 Version: $LATEST
[WARNING]	2021-05-20T00:10:20.591Z	8129cacb-886e-4ee7-b95a-2a577d13b9a8	Error occurred during loading data. Trying to use cache server http://d2g6u4gh6d9rq0.cloudfront.net/browsers/fake_useragent_0.1.10.json
Traceback (most recent call last):
  File "/var/lang/lib/python3.6/urllib/request.py", line 1349, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "/var/lang/lib/python3.6/http/client.py", line 1287, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/var/lang/lib/python3.6/http/client.py", line 1333, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/var/lang/lib/python3.6/http/client.py", line 1282, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/var/lang/lib/python3.6/http/client.py", line 1042, in _send_output
    self.send(msg)
  File "/var/lang/lib/python3.6/http/client.py", line 980, in send
    self.connect()
  File "/var/lang/lib/python3.6/http/client.py", line 952, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "/var/lang/lib/python3.6/socket.py", line 724, in create_connection
    raise err
  File "/var/lang/lib/python3.6/socket.py", line 713, in create_connection
    sock.connect(sa)
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/task/fake_useragent/utils.py", line 67, in get
    context=context,
  File "/var/lang/lib/python3.6/urllib/request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "/var/lang/lib/python3.6/urllib/request.py", line 526, in open
    response = self._open(req, data)
  File "/var/lang/lib/python3.6/urllib/request.py", line 544, in _open
    '_open', req)
  File "/var/lang/lib/python3.6/urllib/request.py", line 504, in _call_chain
    result = func(*args)
  File "/var/lang/lib/python3.6/urllib/request.py", line 1377, in http_open
    return self.do_open(http.client.HTTPConnection, req)
  File "/var/lang/lib/python3.6/urllib/request.py", line 1351, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error timed out>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/task/fake_useragent/utils.py", line 166, in load
    verify_ssl=verify_ssl,
  File "/var/task/fake_useragent/utils.py", line 122, in get_browser_versions
    verify_ssl=verify_ssl,
  File "/var/task/fake_useragent/utils.py", line 84, in get
    raise FakeUserAgentError('Maximum amount of retries reached')
fake_useragent.errors.FakeUserAgentError: Maximum amount of retries reached
Maximum amount of retries reached: FakeUserAgentError
Traceback (most recent call last):
  File "/var/task/index.py", line 19, in handler
    chrome_options.add_argument('user-agent='+UserAgent().random)
  File "/var/task/fake_useragent/fake.py", line 69, in __init__
    self.load()
  File "/var/task/fake_useragent/fake.py", line 78, in load
    verify_ssl=self.verify_ssl,
  File "/var/task/fake_useragent/utils.py", line 250, in load_cached
    update(path, use_cache_server=use_cache_server, verify_ssl=verify_ssl)
  File "/var/task/fake_useragent/utils.py", line 245, in update
    write(path, load(use_cache_server=use_cache_server, verify_ssl=verify_ssl))
  File "/var/task/fake_useragent/utils.py", line 189, in load
    verify_ssl=verify_ssl,
  File "/var/task/fake_useragent/utils.py", line 84, in get
    raise FakeUserAgentError('Maximum amount of retries reached')
fake_useragent.errors.FakeUserAgentError: Maximum amount of retries reached

END RequestId: 8129cacb-886e-4ee7-b95a-2a577d13b9a8
REPORT RequestId: 8129cacb-886e-4ee7-b95a-2a577d13b9a8	Duration: 10559.55 ms	Billed Duration: 10560 ms	Memory Size: 1024 MB	Max Memory Used: 51 MB	Init Duration: 102.55 ms	

[Request] Build file for Tesseract

Would you be able to provide your build file for Tesseract?
I used your Zip for Tesseract (for Python 2.7) and it works fine on Lambda.

But I need it for Python 3.7. I used this link to create Tesseract for Lambda with Python 3.7, but Lambda does not recognize tesseract for the same service.py handler in your script.

Any help will be much appreciated.

Thanks!

[WIP] Update Tensorflow to 1.12 (and v2)

Concept

As Tensorflow's latest version is 1.12.0 and 2.0 is planned for release in the near future, we should consider updating to 1.12 and 2.0.

Plan

  • Add Pack with tensorflow 1.12
  • Add Pack with tensorflow 2.0

Issue

Currently, tensorflow 1.12 and 2.0 are bigger than 250 MB even when stripped.
-> Split the package (and download from S3) or drop other unused things (maybe?)
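
A hedged sketch of the "split package" idea: keep the oversized dependencies in S3, pull them into /tmp at cold start, and add them to sys.path. The bucket and key names are made up, and /tmp has its own size limit, so this only goes so far:

import os
import sys
import zipfile

import boto3

PKG_DIR = '/tmp/pkgs'

def load_heavy_deps(bucket='my-deps-bucket', key='tensorflow-1.12.zip'):  # hypothetical names
    # Download and extract the oversized dependency bundle once per container.
    if not os.path.isdir(PKG_DIR):
        os.makedirs(PKG_DIR)
        archive = '/tmp/deps.zip'
        boto3.client('s3').download_file(bucket, key, archive)
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(PKG_DIR)
    if PKG_DIR not in sys.path:
        sys.path.insert(0, PKG_DIR)

load_heavy_deps()
import tensorflow as tf  # now resolvable from /tmp/pkgs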

Issue with skimage "undefined symbol: _PyThreadState_Current"

I'm trying to use the skimage layer but I'm getting the following error:

Unable to import module 'lambda_function': /opt/python/skimage/_shared/geometry.so: undefined symbol: _PyThreadState_Current
It seems that scikit-image has not been built correctly.

Your install of scikit-image appears to be broken.
Try re-installing the package following the instructions at:
http://scikit-image.org/docs/stable/install.html 

Any idea what's causing this?

No package python36-devel available.

I am following the example of how to build Tensorflow.
When I enter the command below:

docker exec -i -t lambdapackgen /bin/bash /outputs/buildPack_py3.sh

the errors are:
...
No package python36-devel available.
No package python36-virtualenv available.
No package python36-pip available.
...

...
rm: cannot remove 'pip': No such file or directory
rm: cannot remove 'pip-*': No such file or directory
rm: cannot remove 'wheel': No such file or directory
rm: cannot remove 'wheel-*': No such file or directory
rm: cannot remove 'easy_install.py': No such file or directory
...

I think this script doesn't install python36, and then it also doesn't install pip...
How can I fix this?

Thank you in advance!

[Tensorflow] An error occurred (403) when calling the HeadObject operation: Forbidden: ClientError

Exception Logs

An error occurred (403) when calling the HeadObject operation: Forbidden: ClientError
Traceback (most recent call last):
  File "/var/task/index.py", line 125, in handler
    downloadFromS3(strBucket,strKey,strFile)
  File "/var/task/index.py", line 19, in downloadFromS3
    s3_client.download_file(strBucket, strKey, strFile)
  File "/var/runtime/boto3/s3/inject.py", line 130, in download_file
    extra_args=ExtraArgs, callback=Callback)
  File "/var/runtime/boto3/s3/transfer.py", line 299, in download_file
    future.result()
  File "/var/runtime/s3transfer/futures.py", line 73, in result
    return self._coordinator.result()
  File "/var/runtime/s3transfer/futures.py", line 233, in result
    raise self._exception
ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden

Situation

I used serverless to start this:

serverless install -u https://github.com/ryfeus/lambda-packs/tree/master/tensorflow/source -n tensorflow
cd tensorflow
serverless deploy
serverless invoke --function main --log

However, it raises the exception above because I don't have permission to access your bucket, ryfeuslambda.

strBucket = 'ryfeuslambda'
strKey = 'tensorflow/imagenet/imagenet_synset_to_human_label_map.txt'
strFile = '/tmp/imagenet/imagenet_synset_to_human_label_map.txt'
downloadFromS3(strBucket,strKey,strFile)  

Is this the situation you intended?

Linux filename uppercase directory not being read

I was trying to install tensorflow using serverless on an Ubuntu machine and I am facing the following error:

"ENOENT: no such file or directory, scandir '/tmp/lambda-packs/tensorflow/source' "

Is there anything I can do to rectify this?

Python 3?

Any chance of a massive update to run with Python 3.6? I would be willing to help if you could teach me how you built the Python 2 versions.

Query

Hi. I have deployed your code:
lambda-packs/Selenium_Chromium/
on AWS Lambda. It works well!
But when I edit it to run a new website that requires a user login, it does not work. Can you help?

Not able to use tensorflow serverless

serverless install -u https://github.com/ryfeus/lambda-packs/tree/master/Tensorflow/source -n tensorflow

does not work; it gives:

RequestError: Timeout awaiting 'request' for 30000ms
    at ClientRequest.<anonymous> (C:\Users\malay\AppData\Roaming\npm\node_modules\serverless\node_modules\got\dist\source\core\index.js:970:65)
    at Object.onceWrapper (events.js:482:26)
    at ClientRequest.emit (events.js:387:35)
    at ClientRequest.emit (domain.js:470:12)
    at ClientRequest.origin.emit (C:\Users\malay\AppData\Roaming\npm\node_modules\serverless\node_modules\@szmarczak\http-timer\dist\source\index.js:43:20)
    at TLSSocket.socketErrorListener (_http_client.js:475:9)
    at TLSSocket.emit (events.js:375:28)
    at TLSSocket.emit (domain.js:470:12)
    at emitErrorNT (internal/streams/destroy.js:106:8)
    at emitErrorCloseNT (internal/streams/destroy.js:74:3)
    at processTicksAndRejections (internal/process/task_queues.js:82:21)
    at Timeout.timeoutHandler [as _onTimeout] (C:\Users\malay\AppData\Roaming\npm\node_modules\serverless\node_modules\got\dist\source\core\utils\timed-out.js:36:25)
    at listOnTimeout (internal/timers.js:559:11)
    at processTimers (internal/timers.js:500:7)

Issue running Selenium example after modifying and zipping source files

Hello,
I was able to successfully run Pack.zip from the Selenium_PhantomJS dir in Lambda. I then zipped everything in the source dir (without modifying the files at all) and tried to run that zip, but I got the following exception:

Message: 'phantomjs' executable may have wrong permissions. : WebDriverException
Traceback (most recent call last):
  File "/var/task/service.py", line 26, in lambda_handler
    browser = webdriver.PhantomJS(service_log_path=os.path.devnull, executable_path="/var/task/phantomjs", service_args=['--ignore-ssl-errors=true'], desired_capabilities=dcap)
  File "/var/task/selenium/webdriver/phantomjs/webdriver.py", line 52, in __init__
    self.service.start()
  File "/var/task/selenium/webdriver/common/service.py", line 76, in start
    os.path.basename(self.path), self.start_error_message)
WebDriverException: Message: 'phantomjs' executable may have wrong permissions.

Any advice?

Thanks!

No module named PIL error when using Skimage_numpy package

When I replace the service.py with the following code and try to test skimage with a simple imread,
Lambda returns:
Unable to import module 'service': No module named PIL

# -*- coding: utf-8 -*-
from skimage import io
import urllib

def handler(event, context):

	urllib.urlretrieve("http://image.pbs.org/video-assets/pbs/operation-wild/177014/images/mezzanine_928.jpg.focalcrop.767x431.50.10.jpg", "/tmp/hi.jpg")
	img = io.imread('/tmp/hi.jpg')

	return 0

Are there any working code examples with skimage and the package together?
Any help would be greatly appreciated.

Proposal: Using Git LFS

Because git keeps the full history of binary files, the size of this repo keeps growing.
That makes it hard for newcomers to contribute to, or even just clone, the repo.

How about using git-lfs and tracking all the Pack.zip files with it?
It would make these procedures much faster.

error with specified lambda handler

First of all awesome work!
While trying to use the basic, Skimage, Pack.zip package I got the following error:
Handler 'lambda_handler' missing on module 'service': 'module' object has no attribute 'lambda_handler'

The handler should be service.handler instead of service.lambda_handler.
When I changed it, the provided Pack.zip works.

I assume that this might be a problem with other packages as well.
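
For reference, a minimal sketch of a handler that matches the fix described above; the body is just a placeholder:

# service.py
def handler(event, context):
    # ... pack-specific logic ...
    return 'ok'

The handler setting in the Lambda console or serverless.yml should then be service.handler rather than service.lambda_handler.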

Max memory usage - Internal server error

Hi,
Basically, I am using the Inception model to extract features for a single image:

pool3 = sess.graph.get_tensor_by_name('incept/pool_3:0')
pool3_features = sess.run(pool3, {'incept/DecodeJpeg/contents:0': data})

However, half of the time the function gives an error (internal server error), and I can see in CloudWatch that it's because of max memory usage. I have increased the memory limit to 1536 MB and can't go further.

Any ideas?

lightgbm with tensorflow

I have a similar question as #29: can lightgbm and tensorflow be installed in a single Lambda, or is this just too large for Lambda to handle?

TensorFlow Serving?

Any way you could add support for the TensorFlow Serving client APIs? When I follow the official instructions for packaging, I download 475 MB, way over the 256 MB limit.

$ python -m pip install tensorflow-serving-api --target .
Collecting tensorflow-serving-api
  Downloading https://files.pythonhosted.org/packages/79/69/1e724c0d98f12b12f9ad583a3df7750e14ec5f06069aa4be8d75a2ab9bb8/tensorflow_serving_api-1.12.0-py2.py3-none-any.whl
...
$ du -hs .
475M    .

Can I build locally or only inside EC2 instance?

I am trying to build the pytorch pack using build-with-docker.sh, which uses the amazonlinux:1 image. However, it hangs in the yum commands when I run it locally and times out. If I run buildPack_py3.sh directly in an EC2 instance using the Amazon Linux AMI, it runs without problems.

I thought the purpose behind using docker was to be able to build the packs locally. Am I missing something?

Thanks,

Getting error on uploading Pack.zip for Tesseract

START RequestId: dd896580-dd19-11e8-b337-198d982a5114 Version: $LATEST
LD_LIBRARY_PATH=/var/task/lib TESSDATA_PREFIX=/var/task ./tesseract /tmp/imgres.png /tmp/result
Start
END RequestId: dd896580-dd19-11e8-b337-198d982a5114
REPORT RequestId: dd896580-dd19-11e8-b337-198d982a5114 Duration: 3003.30 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 38 MB
2018-10-31T14:33:02.588Z dd896580-dd19-11e8-b337-198d982a5114 Task timed out after 3.00 seconds

Connection Refused Only for HTTPS

I'm running this code on Lambda and I modified the handler to accept different test scripts. It works great for HTTP sites, but the connection is refused any time I load HTTPS. Any thoughts? Below are the error and the modified service file.

ERROR

START RequestId: 80ac0c45-5473-442d-882c-3825e89f29d6 Version: $LATEST
<urlopen error [Errno 111] Connection refused>: URLError
Traceback (most recent call last):
  File "/var/task/service.py", line 34, in handler
    exec(script, globals())
  File "<string>", line 36, in <module>
  File "<string>", line 2, in test_qGlobalCreateClient
  File "/var/task/selenium/webdriver/remote/webdriver.py", line 693, in implicitly_wait
    'ms': float(time_to_wait) * 1000})
  File "/var/task/selenium/webdriver/remote/webdriver.py", line 234, in execute
    response = self.command_executor.execute(driver_command, params)
  File "/var/task/selenium/webdriver/remote/remote_connection.py", line 407, in execute
    return self._request(command_info[0], url, body=data)
  File "/var/task/selenium/webdriver/remote/remote_connection.py", line 477, in _request
    resp = opener.open(request, timeout=self._timeout)
  File "/usr/lib64/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/usr/lib64/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/usr/lib64/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 1237, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib64/python2.7/urllib2.py", line 1207, in do_open
    raise URLError(err)
URLError: <urlopen error [Errno 111] Connection refused>

SERVICE.PY (excluding imports)

user_agent = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.111 Safari/537.36")
dcap = dict(DesiredCapabilities.PHANTOMJS)
dcap["phantomjs.page.settings.userAgent"] = user_agent
dcap["phantomjs.page.settings.javascriptEnabled"] = True
driver = webdriver.PhantomJS(service_log_path=os.path.devnull, executable_path="/var/task/phantomjs", service_args=['--ignore-ssl-errors=true'], desired_capabilities=dcap)

def handler(event, context):
    event_string = json.dumps(event)
    dictionary = json.loads(event_string)
    input = json.dumps(dictionary["Base64Script"])
    script = base64.b64decode(input)
    exec(script, globals())
    print(os.popen('df -k /tmp ; ls -al /tmp').read())
    os.popen('rm -rf /tmp/')

Performance issues about tf.function

Hello! Our static bug checker has found a performance issue in ONNX/lambda-onnx/onnxruntime/transformers/benchmark.py: run_with_tf_optimizations (1), (2), (3) is repeatedly called in a for loop, but a tf.function-decorated function, run_in_graph_mode, is defined and called inside run_with_tf_optimizations.

In that case, when run_with_tf_optimizations is called in a loop, run_in_graph_mode will create a new graph every time, which can trigger the tf.function retracing warning.

A similar problem exists in ONNX-ARM/lambda-onnx-arm-3.8/onnxruntime/transformers/benchmark.py.

Here is the TensorFlow documentation to support this.

Briefly, for better efficiency, it's better to use:

@tf.function
def inner():
    pass

def outer():
    inner()  

than:

def outer():
    @tf.function
    def inner():
        pass
    inner()

Looking forward to your reply.

Deploy fail

I get a Python version error while deploying the document generator:

Resource handler returned message: "The runtime parameter of python2.7 is no longer supported for creating or updating AWS Lambda functions. We recommend you use the new runtime (python3.9) while creating or updating functions. (Service: Lambda, Status Code: 400, Request ID: 7767aad4-e194-48a5-af6a-be7f94eed630)" (RequestToken: 58bfcbfd-1a15-8151-0983-aa9907fce71c, HandlerErrorCode: InvalidRequest)

Trying to change the model

I got permission errors trying to access your bucket, so I decided to unpack your Pack, change the S3 arguments to point at my own model in my own bucket (a retrained Inception v3 model), and zip that. When I upload it, I get a "no module named index" error. Am I doing something wrong? Also, I have no idea how to write AWS Lambda functions yet: how would I go about writing the Lambda test function? Let's say I want to upload an image from my mobile application over HTTPS to the function and have it classify that. Is that possible?
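
On the test-function question: a hedged sketch of a handler that accepts a base64-encoded image posted through an API Gateway proxy integration. The event field names and response shape are assumptions, not the pack's actual contract:

import base64
import json

def handler(event, context):
    # API Gateway proxy integrations put the request payload in event['body'];
    # assume the mobile client POSTs JSON like {"image": "<base64 data>"}.
    body = json.loads(event.get('body') or '{}')
    image_bytes = base64.b64decode(body.get('image', ''))

    local_path = '/tmp/input.jpg'
    with open(local_path, 'wb') as f:
        f.write(image_bytes)

    # ... run the classifier on local_path and collect labels here ...
    labels = []  # placeholder
    return {'statusCode': 200, 'body': json.dumps({'labels': labels})}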

add opencv/PIL to Keras_tensorflow pack

Hi @ryfeus, thanks for sharing your great work with the world.

I have a Keras model trained on the TF backend. I need to serve it through a Lambda function. My model takes an image as input and uses OpenCV at training time.

In my testing, the keras_tensorflow pack does not include OpenCV. Is there a way I can include OpenCV in the Pack, or what could be a possible alternative?

What changes do I need to make in this case? Or what steps do I need to take to add OpenCV/PIL to the Keras pack?

Apologies if the question is naive; I am new to the Lambda world.

Thanks
Aman
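
One possible alternative rather than an official answer: if the inference-time preprocessing only needs resizing and scaling, Pillow plus numpy can often stand in for OpenCV. A hedged sketch, assuming Pillow can be bundled and a 224x224 RGB input; the scaling must match whatever the model was trained with:

import numpy as np
from PIL import Image

def load_image(path, size=(224, 224)):
    # Resize and normalise an image without OpenCV.
    img = Image.open(path).convert('RGB').resize(size)
    arr = np.asarray(img, dtype=np.float32) / 255.0  # illustrative scaling
    return np.expand_dims(arr, axis=0)               # add batch dimension

# prediction = model.predict(load_image('/tmp/input.jpg'))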

Getting error on uploading Pack.zip for Tesseract

START RequestId: c1cf2730-90e1-11e8-8015-797f4abdd12c Version: $LATEST
Unable to import module 'main': No module named httplib2

END RequestId: c1cf2730-90e1-11e8-8015-797f4abdd12c
REPORT RequestId: c1cf2730-90e1-11e8-8015-797f4abdd12c	Duration: 0.31 ms	Billed Duration: 100 ms 	Memory Size: 512 MB	Max Memory Used: 20 MB	

Live scraping

Is it possible to leave the session open and scrape refreshed data from a page?

Lambda Packs for Tensorflow 1.6 based on Python 3

Hi, I'm trying to create a Lambda pack for Tensorflow 1.6 based on Python 3.
However, I am not able to compress the pack to less than 50 MB.

Can you please share your thoughts on how to do that?
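
A hedged sketch of the kind of pruning that usually helps: strip debug symbols from compiled extensions and delete tests and bytecode caches before zipping. The staging path is made up, and even after this TensorFlow may still need to be deployed via S3 rather than a direct 50 MB upload:

import os
import shutil
import subprocess

PKG_ROOT = 'build/site-packages'  # hypothetical staging directory

for root, dirs, files in os.walk(PKG_ROOT):
    # Strip debug symbols from compiled extensions; most of the size in
    # numpy/tensorflow wheels lives in the .so files.
    for name in files:
        if name.endswith('.so'):
            subprocess.call(['strip', os.path.join(root, name)])
    # Drop test suites and bytecode caches, which Lambda never needs.
    for d in list(dirs):
        if d in ('tests', 'test', '__pycache__'):
            shutil.rmtree(os.path.join(root, d))
            dirs.remove(d)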

A very expensive bug in your tensorflow example

Hi,
Your Tensorflow example is flawed. You are reloading the graph on every request; as a result, you are overpaying by almost 40%. By making SESSION a global/module-level variable, you can save a significant amount of time. Hope you are not doing this in production :)

You can see the commit with fix here: https://github.com/AKSHAYUBHAT/lambda-packs/commit/ff704bbd307114c010b434b2b291c6a96b8bee0b

A quick example run showing whether the model was loaded or not, and the time taken to load the model vs. the time taken to execute:

(portenv)COECISs-MacBook-Pro:tensorflow aub3$ serverless invoke --function main --log
"(array([535, 574, 817, 918, 794]), True, 0.66, 1.78)"
--------------------------------------------------------------------
START RequestId: dc351af0-0988-11e8-94ce-678a08c654a0 Version: $LATEST
END RequestId: dc351af0-0988-11e8-94ce-678a08c654a0
REPORT RequestId: dc351af0-0988-11e8-94ce-678a08c654a0	Duration: 4117.68 ms	Billed Duration: 4200 ms 	Memory Size: 1536 MB	Max Memory Used: 653 MB	


(portenv)COECISs-MacBook-Pro:tensorflow aub3$ serverless invoke --function main --log
"(array([535, 574, 817, 918, 794]), False, 0.0, 1.3)"
--------------------------------------------------------------------
START RequestId: e3a73ba6-0988-11e8-8675-532e4c025f2d Version: $LATEST
END RequestId: e3a73ba6-0988-11e8-8675-532e4c025f2d
REPORT RequestId: e3a73ba6-0988-11e8-8675-532e4c025f2d	Duration: 2800.42 ms	Billed Duration: 2900 ms 	Memory Size: 1536 MB	Max Memory Used: 712 MB	


(portenv)COECISs-MacBook-Pro:tensorflow aub3$ serverless invoke --function main --log
"(array([535, 574, 817, 918, 794]), False, 0.0, 1.29)"
--------------------------------------------------------------------
START RequestId: e7969dd3-0988-11e8-ab4e-ad32ed6b7fad Version: $LATEST
END RequestId: e7969dd3-0988-11e8-ab4e-ad32ed6b7fad
REPORT RequestId: e7969dd3-0988-11e8-ab4e-ad32ed6b7fad	Duration: 2717.17 ms	Billed Duration: 2800 ms 	Memory Size: 1536 MB	Max Memory Used: 747 MB	


(portenv)COECISs-MacBook-Pro:tensorflow aub3$ serverless invoke --function main --log
"(array([535, 574, 817, 918, 794]), False, 0.0, 1.35)"
--------------------------------------------------------------------
START RequestId: ec9c2b24-0988-11e8-955a-cf72c5b8c542 Version: $LATEST
END RequestId: ec9c2b24-0988-11e8-955a-cf72c5b8c542
REPORT RequestId: ec9c2b24-0988-11e8-955a-cf72c5b8c542	Duration: 2833.48 ms	Billed Duration: 2900 ms 	Memory Size: 1536 MB	Max Memory Used: 751 MB
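
For readers hitting the same cost issue, a minimal sketch of the module-level caching described above, in TF 1.x style to match the pack; the graph-loading step is omitted:

import tensorflow as tf

SESSION = None  # reused across warm invocations of the same container

def get_session():
    global SESSION
    if SESSION is None:
        graph = tf.Graph()
        with graph.as_default():
            # ... load the frozen GraphDef / model here (omitted) ...
            pass
        SESSION = tf.Session(graph=graph)
    return SESSION

def handler(event, context):
    sess = get_session()
    # ... run inference with sess.run(...) as in the original example ...
    return 'ok'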
