
agnostiqhq / covalent-braket-plugin

Executor plugin interfacing Covalent with Amazon Braket Hybrid Jobs

Home Page: https://covalent.xyz

License: Apache License 2.0

Languages: Python 86.24%, Dockerfile 1.43%, HCL 12.34%
Topics: python, docker, workflow, parallelization, pipelines, python3, quantum-computing, quantum-machine-learning, braket, aws

covalent-braket-plugin's Introduction

 


Covalent Braket Hybrid Jobs Plugin

Covalent is a Pythonic workflow tool used to execute tasks on advanced computing hardware. This executor plugin interfaces Covalent with Amazon Braket Hybrid Jobs.

Installing

To use this plugin with Covalent, install it with pip:

pip install covalent-braket-plugin

Usage Example

The following workflow prepares a uniform superposition of the single-qubit standard basis states and measures it.

import covalent as ct
from covalent_braket_plugin.braket import BraketExecutor
import os

# AWS resources to pass to the executor
credentials_file = "~/.aws/credentials"
profile = "default"
s3_bucket_name = "braket-s3-bucket"  # S3 bucket names may not contain underscores
ecr_image_uri = "111223344.dkr.ecr.us-east-1.amazonaws.com/amazon-braket-ecr-repo:latest"
iam_role_name = "covalent-braket-iam-role"

ex = BraketExecutor(
    profile=profile,
    credentials=credentials_file,
    s3_bucket_name=s3_bucket_name,
    ecr_image_uri=ecr_image_uri,
    braket_job_execution_role_name=iam_role_name,
    quantum_device="arn:aws:braket:::device/quantum-simulator/amazon/sv1",
    classical_device="ml.m5.large",
    storage=30,
    time_limit=300,
)


@ct.electron(executor=ex)
def simple_quantum_task(num_qubits: int):
    import pennylane as qml

    # These are passed to the Hybrid Jobs container at runtime
    device_arn = os.environ["AMZN_BRAKET_DEVICE_ARN"]
    s3_bucket = os.environ["AMZN_BRAKET_OUT_S3_BUCKET"]
    s3_task_dir = os.environ["AMZN_BRAKET_TASK_RESULTS_S3_URI"].split(s3_bucket)[1]

    device = qml.device(
        "braket.aws.qubit",
        device_arn=device_arn,
        s3_destination_folder=(s3_bucket, s3_task_dir),
        wires=num_qubits,
    )

    @qml.qnode(device=device)
    def simple_circuit():
        qml.Hadamard(wires=[0])
        return qml.expval(qml.PauliZ(wires=[0]))

    res = simple_circuit().numpy()
    return res


@ct.lattice
def simple_quantum_workflow(num_qubits: int):
    return simple_quantum_task(num_qubits=num_qubits)


dispatch_id = ct.dispatch(simple_quantum_workflow)(1)
result_object = ct.get_result(dispatch_id, wait=True)

# We expect 0 as the result
print("Result:", result_object.result)

To run such workflows, users must have AWS credentials allowing access to Braket, ECR, S3, and some other services. These permissions must be defined in an IAM Role (called "covalent-braket-iam-role" in this example). The AWS documentation has more information about managing Braket access.

Overview of Configuration

See the RTD for how to configure this executor.

Required Cloud Resources

In order to run your workflows with Covalent, a few notable resources need to be provisioned first: an S3 bucket, an IAM role with the AmazonBraketFullAccess policy, and a private ECR repository with an uploaded image for the tasks to use.
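
For orientation, a hedged boto3 sketch of provisioning these resources by hand (all names are placeholders and the region is assumed to be us-east-1; in practice the plugin's Terraform assets create these):

import json

import boto3

# An S3 bucket for job artifacts (no CreateBucketConfiguration needed in us-east-1)
boto3.client("s3").create_bucket(Bucket="my-braket-plugin-bucket")

# An IAM role that Braket jobs can assume, with the AmazonBraketFullAccess policy
iam = boto3.client("iam")
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "braket.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
iam.create_role(
    RoleName="covalent-braket-iam-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="covalent-braket-iam-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonBraketFullAccess",
)

# A private ECR repository to hold the task image
boto3.client("ecr").create_repository(repositoryName="amazon-braket-covalent-repo")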

For more information about which cloud resources need to be provisioned, see the plugin's RTD guide.

Release Notes

Release notes are available in the Changelog.

Citation

Please use the following citation in any publications:

W. J. Cunningham, S. K. Radha, F. Hasan, J. Kanem, S. W. Neagle, and S. Sanand. Covalent. Zenodo, 2022. https://doi.org/10.5281/zenodo.5903364

License

Covalent is licensed under the Apache 2.0 License. See the LICENSE file or contact the support team for more details.

covalent-braket-plugin's People

Contributors

alejandroesquivel, cjao, fyzhsn, jkanem, kessler-frost, scottwn, venkatbala, wingcode, wjcunningham7


Forkers

wingcode

covalent-braket-plugin's Issues

Prepare the braket executor for release

  • The braket executor should assume that infrastructure already exists. Infrastructure provisioning does not need to happen in this executor.
  • Test the existing executor and figure out what enhancements, fixes, tests, and docs need to be done for release.

Move IaC to plugin repo & Pydantic validation models

Overview

The changes required to the plugin repo are minimal. We need an assets/infra folder where the Terraform files are stored. For example, the repo structure would look like this:

- .github
- <PLUGIN_FOLDER>
    - assets
        - **infra**
            - main.tf
            - ...

We would also need to add Pydantic classes to validate the infra and executor arguments, for example:

from typing import Optional

from pydantic import BaseModel


class ExecutorPluginDefaults(BaseModel):
    """
    Default configuration values for the executor
    """

    credentials: str = ""
    profile: str = ""
    region: str = "us-east-1"
    ...
    retry_attempts: int = 3
    time_limit: int = 300
    poll_freq: int = 10

class ExecutorInfraDefaults(BaseModel):
    """
    Configuration values for provisioning AWS Braket cloud infrastructure
    """

    prefix: str
    aws_region: str = "us-east-1"
    ...
    credentials: Optional[str] = ""
    profile: Optional[str] = ""
    retry_attempts: Optional[int] = 3
    time_limit: Optional[int] = 300
    poll_freq: Optional[int] = 10


_EXECUTOR_PLUGIN_DEFAULTS = ExecutorPluginDefaults().dict()

EXECUTOR_PLUGIN_NAME = "<PLUGIN_NAME>"

Acceptance Criteria

  • Move infra to repo
  • Create pydantic validation models

Braket Executor not using config values as defaults

Description

When instantiating the executor class, the executor does not use defaults from the config file; hence the following would have undefined values for credentials, profile, time_limit, storage, etc.

executor = ct.executor.BraketExecutor(
    s3_bucket_name = "amazon-braket-covalent-qa-bucket",
    ecr_repo_name = "amazon-braket-covalent-qa-ecr-repo",
    braket_job_execution_role_name = "amazon-braket-covalent-qa-role",
    poll_freq=5
)

We want the executor to fall back to the config if values are not defined.
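
A minimal sketch of the desired fallback behavior, assuming Covalent's get_config helper and an executors.braket section in the config file (the exact key names here are illustrative):

from covalent._shared_files.config import get_config


def _with_config_fallback(value, key):
    """Use the explicitly passed value, else fall back to the config file entry."""
    return value if value is not None else get_config(f"executors.braket.{key}")


# e.g. inside BraketExecutor.__init__ (illustrative):
# self.time_limit = _with_config_fallback(time_limit, "time_limit")
# self.poll_freq = _with_config_fallback(poll_freq, "poll_freq")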

Prescriptive guidance document for QA

Spend no more than 30 minutes on this task

Write an explicit list of instructions for how this executor should be validated for QA. Assume the QA engineer has not used the executor before (but otherwise is familiar with Covalent as an Agnostiq engineer).

The list of instructions should be posted as a comment on this issue. Mention @wjcunningham7 in the comment.

If such instructions already exist in a README or RTD, you can instead link to those instructions.

Acceptance Criteria

  • An explicit list of instructions, or a link to instructions, for how to validate the executor for QA

Update AWS Braket Executor to use AWSExecutor

Acceptance Criteria

  • Pass any required attributes from credentials_file, region, s3_bucket_name, profile, execution_role, and log_group_name to the AWSExecutor superclass
  • Re-structure the executor to conform to the abstract methods defined in RemoteExecutor, namely _upload_task, submit_task, get_status, _poll_task, _query_result, and cancel, where appropriate
  • Update unit tests for executor
  • Update doc strings for executor

Support task cancellation

Description

Covalent now supports dispatch cancellation. Executor plugins can opt in to this functionality by registering a job identifier for each task (in this case the job id or ARN) and implementing a cancel method. This method will be invoked by Covalent when the user requests that the task be cancelled.

Methods provided by Covalent

All executor plugins automatically provide the following methods, which can be invoked in the plugin's run() method.

async def set_job_handle(self, job_handle)

saves a job identifier (job_handle) in Covalent's database. The job_handle can be any JSONable type, typically a string or integer, and should contain whatever information is needed to cancel the job later. For instance, AWS Batch's cancel_job method expects the job's jobId.

async def get_cancel_requested(self) -> bool

queries Covalent if task cancellation has been requested. This can be called at various junctures in the run() method if desired.

The run() method should raise a TaskCancelledError exception upon ascertaining that the backend job has been cancelled.

Methods to be implemented by each plugin:

Each plugin defines the following abstract method to be overridden with backend-specific logic:

async def cancel(self, task_metadata: Dict, job_handle: str) -> bool

Upon receiving a cancellation request, the Covalent server will invoke this with the following inputs:

  • task_metadata: currently contains the keys dispatch_id and node_id as for the run() method.
  • job_handle: the task's job identifier previously saved using set_job_handle().

In addition to querying Covalent for cancellation requests, the run() method may need to query the compute backend to determine whether the job is in fact cancelled and raise TaskCancelledError if that is the case.

The code below sketches how to use the above methods:

from covalent._shared_files.exceptions import TaskCancelledError

...
async def proceed_if_task_not_cancelled(self):
    if await self.get_cancel_requested():
        self._debug_log("Task cancelled")
        raise TaskCancelledError("Braket job cancelled")

async def run(self, function: Callable, args: List, kwargs: Dict, task_metadata: Dict) -> Any:
    ...
    await self.proceed_if_task_not_cancelled()
    # upload pickled assets
    ...
    await self.proceed_if_task_not_cancelled()
    # invoke job/task
    await self.set_job_handle(handle=job_handle)
    ...
    # await self.poll_job_status(task_metadata, job_handle)

async def poll_job_status(self, task_metadata, job_id):
    # Boto3 client invocations to check the job status
    # while timeout_not_exceeded:
    #    job_state = client.describe_job(job_id)
    #    if job_state == "CANCELLED":
    #        raise TaskCancelledError
    #    ...
    #    await asyncio.sleep(poll_freq)
    ...

async def cancel(self, task_metadata: Dict, job_handle: str) -> bool:
    """
    Cancel the Braket job.

    Args:
        task_metadata: Dictionary with the task's dispatch_id and node_id
        job_handle: Unique job handle assigned to the task by Braket

    Returns:
        True/False indicating whether the job was cancelled
    """
    # boto client invocations to cancel the task
    ...
    if job_cancelled:
        return True
    # Job cannot be cancelled for one reason or another
    return False

Acceptance Criteria

  • In the run() method:
    • Save the job handle for the task once that is determined.
    • Check whether the job has been cancelled at various junctures
  • Implement cancel method
  • Test a workflow with cancellation to ensure the cancel functionality is correctly integrated

Update electron statuses for `AWSBraketExecutor`

Once separation of workflow and electron statuses is done, the electron-level statuses need to be updated to accommodate executor-dependent statuses. In this case the following status definitions will be updated:

  • REGISTERING - Uploading task and creating Braket job definition

  • PENDING_BACKEND - Corresponds to Braket job state Created

  • RUNNING - Corresponds to Braket job state Running

  • COMPLETING - Braket job is in the Stopped state; result files are being retrieved and temporary files are being deleted

For end states, there is a separate class called EndState that contains all possible end-state statuses such as COMPLETED, FAILED, etc., so these do not need to be added to the executor-dependent statuses.

Acceptance Criteria:

  • The above-mentioned statuses need to be updated inside the local executor
  • Tests need to be added to verify that those definitions are as expected

Docker image tagging fails

When I try to run a sample workflow, the workflow fails during the _package_and_upload phase of the executor with the following error:

  File "/var/home/casey/.conda/envs/qa-0.177-3.8/lib/python3.8/site-packages/covalent_braket_plugin/braket.py", line 142, in execute
    ecr_repo_uri = self._package_and_upload(
  File "/var/home/casey/.conda/envs/qa-0.177-3.8/lib/python3.8/site-packages/covalent_braket_plugin/braket.py", line 328, in _package_and_upload
    image.tag(ecr_repo_uri, tag=image_tag)
  File "/var/home/casey/.conda/envs/qa-0.177-3.8/lib/python3.8/site-packages/docker/models/images.py", line 120, in tag
    return self.client.api.tag(self.id, repository, tag=tag, **kwargs)
  File "/var/home/casey/.conda/envs/qa-0.177-3.8/lib/python3.8/site-packages/docker/utils/decorators.py", line 19, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "/var/home/casey/.conda/envs/qa-0.177-3.8/lib/python3.8/site-packages/docker/api/image.py", line 565, in tag
    self._raise_for_status(res)
  File "/var/home/casey/.conda/envs/qa-0.177-3.8/lib/python3.8/site-packages/docker/api/client.py", line 270, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/var/home/casey/.conda/envs/qa-0.177-3.8/lib/python3.8/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)

docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.40/images/sha256:bf086f086a08041003c5683ab61ba1c89746f9a31737046f2c5da3093d6d9957/tag?tag=be53a596-a688-48fe-8830-8d448f26aa13-0&repo=348041629502.dkr.ecr.us-east-1.amazonaws.com%2Fcasey-qa-0.2.0%3Abe53a596-a688-48fe-8830-8d448f26aa13-0&force=0: Internal Server Error ("error normalizing image: normalizing name for compat API: invalid reference format")

I think the problem here is that the value of the repo parameter, 348041629502.dkr.ecr.us-east-1.amazonaws.com/casey-qa-0.2.0:be53a596-a688-48fe-8830-8d448f26aa13-0, is not a valid image name.

According to the docker docs:

An image name is made up of slash-separated name components, optionally prefixed by a registry hostname. The hostname must comply with standard DNS rules, but may not contain underscores. If a hostname is present, it may optionally be followed by a port number in the format :8080. ... Name components may contain lowercase letters, digits and separators. A separator is defined as a period, one or two underscores, or one or more dashes. A name component may not start or end with a separator.

In the invocation of image.tag(), the image name is ecr_repo_uri = {ecr_repo_name}:{dispatch_id}-{node_id}. This violates the above rules. I can tag with curl by using just ecr_repo_name as the image name, like

curl --unix-socket /run/docker.sock -X POST "http://localhost/v1.40/images/sha256:bf086f086a08041003c5683ab61ba1c89746f9a31737046f2c5da3093d6d9957/tag?tag=be53a596-a688-48fe-8830-8d448f26aa13-0&repo=348041629502.dkr.ecr.us-east-1.amazonaws.com%2Fcasey-qa-0.2.0&force=0"

but do not know the effect of that change on the rest of the code.
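
A minimal sketch of that workaround in docker-py terms (an assumption, not a confirmed patch): pass the bare repository URI to image.tag() and move the dispatch/node identifier into the separate tag argument, so the repo parameter stays a valid image name:

import docker

client = docker.from_env()
image = client.images.get(
    "sha256:bf086f086a08041003c5683ab61ba1c89746f9a31737046f2c5da3093d6d9957"
)

# Repository without the ":{dispatch_id}-{node_id}" suffix that made the name
# invalid; the suffix goes in the tag argument instead.
ecr_repo_uri = "348041629502.dkr.ecr.us-east-1.amazonaws.com/casey-qa-0.2.0"
image.tag(repository=ecr_repo_uri, tag="be53a596-a688-48fe-8830-8d448f26aa13-0")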

Non async boto3 calls blocking main thread

Description

Since the executors are intended to be async-aware, they are expected to use asynchronous calls for I/O-bound operations. Currently the official AWS SDK client, boto3, only performs synchronous/blocking calls when interacting with Amazon services.

Because this executor uses boto3 to make requests to AWS services (such as uploading function files, etc.), it currently blocks the main thread, hurting parallelization and adding latency to the Covalent server (including when viewing the UI). We will therefore need to leverage client libraries that support asyncio, such as aioboto3 (sketched below), and extend them if necessary.
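
For illustration, a minimal sketch of an awaitable S3 upload using the third-party aioboto3 package (an assumption; the actual client library and call sites are still to be decided):

import asyncio

import aioboto3


async def upload_pickled_function(path: str, bucket: str, key: str) -> None:
    # The async context manager yields a non-blocking S3 client, so the upload
    # no longer stalls the Covalent server's event loop.
    session = aioboto3.Session()
    async with session.client("s3") as s3:
        await s3.upload_file(path, bucket, key)


# Usage: asyncio.run(upload_pickled_function("function.pkl", "my-bucket", "tasks/function.pkl"))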

Update README to reflect transition to pre-built images

Once the Braket container images are being built in the Release workflow and the executor supports an ecr_image_uri instead of an ecr_repo_name, the README needs to be updated to reflect these changes.

Acceptance criteria

  • README is updated

Move executor deployment workflow from core covalent

Acceptance Criteria

  • Move Braket base executor deployment workflow from core covalent to braket repo
  • Ensure it can be triggered manually or callable from another workflow

Input variables should include:

  1. Tag/version of braket executor to use
  2. Version of covalent, to resolve the tag for the covalent base executor; this also determines which tag to produce for the braket executor (covalent-MAJOR_VERSION)
  3. Is Prerelease flag

Update README

Acceptance Criteria

  • The README documents the necessary setup and permissions, including AWS prerequisites

Update Braket executor to return stdout, stderr, and runtime exceptions

The purpose of this issue is to ensure that the Braket executor

  • returns any stdout and stderr printed by tasks using the mechanism defined in #1380. To do this, the run() implementation needs to
    • retrieve the stdout and stderr from the executor backend -- in the case of AWS Executors, by parsing CloudWatch logs (see how Braket does this)
    • print those strings to self.task_stdout and self.task_stderr.
  • distinguish task runtime exceptions -- those raised in the executor backend -- from other exceptions originating from the interaction of Covalent and the executor backend. This is explained in more detail in #1390. When a runtime exception occurs, the run() implementation should (a sketch follows this list):
    1. Retrieve whatever stdout and stderr messages have already been printed by the task
    2. Ensure that the exception traceback is appended to the task's stderr.
    3. Print stdout and stderr to self.task_stdout and self.task_stderr, respectively.
    4. Raise a TaskRuntimeError.
      For examples, see how the dask executor now deals with task runtime exceptions.
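
A hedged sketch of that error path (assumptions: TaskRuntimeError lives in covalent._shared_files.exceptions, and self.task_stdout / self.task_stderr are the writable streams provided by the base executor per #1380):

from covalent._shared_files.exceptions import TaskRuntimeError


def surface_task_failure(self, stdout_str: str, stderr_str: str, traceback_str: str):
    """Helper to be called from run() once a task runtime exception is detected."""
    print(stdout_str, file=self.task_stdout)                  # steps 1 and 3 (stdout)
    print(stderr_str + traceback_str, file=self.task_stderr)  # steps 2 and 3 (stderr)
    raise TaskRuntimeError("Task raised an exception in the Braket backend")  # step 4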

Note: These changes should be implemented in a backward-compatible manner -- so that the new AWSExecutors should work with Covalent 0.202.0post1 (the current stable AWSExecutors work with the latest develop).

Acceptance criteria:

  • Any stdout and stderr printed by a task before raising an unhandled exception is retrieved and printed to self.task_stdout and self.task_stderr respectively, where self is the executor plugin instance.
  • If a task raises an exception:
    • The traceback is included in the task’s stderr.
    • The run() method raises a TaskRuntimeError.
  • The executor plugin remains compatible with Covalent Core 0.202.0post1.

Bug when unpickling circuit results

The test script included with this plugin will fail during the unpickling step with the following error:

Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/home/will/miniconda3/envs/covalent/lib/python3.8/site-packages/pennylane/numpy/tensor.py", line 216, in __setstate__
    super().__setstate__(reduced_obj[:-1])
TypeError: __setstate__() argument 1, item 0 must be tuple, not int
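
A self-contained illustration of one possible mitigation (an assumption, not a confirmed fix): cast the pennylane tensor to a built-in float before it crosses the pickle boundary, since plain Python scalars round-trip cleanly:

import pickle

from pennylane import numpy as pnp

value = pnp.array(0.0)  # a pennylane tensor, like the task's circuit output
safe = float(value)     # a built-in float avoids tensor.__setstate__ entirely
assert pickle.loads(pickle.dumps(safe)) == 0.0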

Not working with current stable covalent

Covalent version: 0.220.0.post2
Python version: 3.8

The following error is raised when we try to run with the latest braket image:

Traceback (most recent call last):
  File "/opt/ml/code/exec.py", line 24, in <module>
    result = function(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/covalent/executor/base.py", line 106, in wrapper_fn
    output = fn(*new_args, **new_kwargs)
  File "/home/avalanche/anaconda3/envs/qa-38/lib/python3.8/site-packages/covalent_dispatcher/_core/runner.py", line 260, in qelectron_compatible_wrapper
  File "/tmp/ipykernel_14208/1216327594.py", line 29, in simple_quantum_task
  File "/usr/local/lib/python3.8/site-packages/pennylane/__init__.py", line 329, in device
    plugin_device_class = plugin_devices[name].load()
  File "/usr/local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2450, in load
    return self.resolve()
  File "/usr/local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2456, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/usr/local/lib/python3.8/site-packages/braket/pennylane_plugin/__init__.py", line 14, in <module>
    from braket.pennylane_plugin.ahs_device import (  # noqa: F401
  File "/usr/local/lib/python3.8/site-packages/braket/pennylane_plugin/ahs_device.py", line 50, in <module>
    from .ahs_translation import (
  File "/usr/local/lib/python3.8/site-packages/braket/pennylane_plugin/ahs_translation.py", line 133, in <module>
    def _create_register(coordinates: list[tuple[float, float]]):
TypeError: 'type' object is not subscriptable

The root cause appears to be that list[tuple[float, float]] uses built-in generic syntax (PEP 585), which requires Python 3.9+, while the image runs Python 3.8. The workflow used to test is:

import covalent as ct
import os

# AWS resources to pass to the executor
braket_job_execution_role_name = "<obtained from tf deployment>"
ecr_image_uri = "<obtained from tf deployment>"
s3_bucket_name = "<obtained from tf deployment>"


# Instantiate the executor
ex = ct.executor.BraketExecutor(
    s3_bucket_name=s3_bucket_name,
    ecr_image_uri=ecr_image_uri,
    braket_job_execution_role_name=braket_job_execution_role_name,
)


# Execute the following circuit:
# |0> - H - Measure
@ct.electron(executor=ex)
def simple_quantum_task(num_qubits: int):
    import pennylane as qml

    # These are passed to the Hybrid Jobs container at runtime
    device_arn = os.environ["AMZN_BRAKET_DEVICE_ARN"]
    s3_bucket = os.environ["AMZN_BRAKET_OUT_S3_BUCKET"]
    s3_task_dir = os.environ["AMZN_BRAKET_TASK_RESULTS_S3_URI"].split(s3_bucket)[1]

    device = qml.device(
        "braket.aws.qubit",
        device_arn=device_arn,
        s3_destination_folder=(s3_bucket, s3_task_dir),
        wires=num_qubits,
    )

    @qml.qnode(device=device)
    def simple_circuit():
        qml.Hadamard(wires=[0])
        return qml.expval(qml.PauliZ(wires=[0]))

    res = simple_circuit().numpy()
    return res


@ct.lattice
def simple_quantum_workflow(num_qubits: int):
    return simple_quantum_task(num_qubits=num_qubits)

dispatch_id = ct.dispatch(simple_quantum_workflow)(1)
print(dispatch_id)

Hanging or failing Braket tasks when evaluating many quantum circuits

Description

Executing the following braket task seems to hang indefinitely, or to fail when parallel=True is specified in the device:

@ct.electron(
    executor=braket_executor,
    deps_pip=braket_deps_pip
)
def do_quantum_svm(data, features, some_value=None):
    device_arn = os.environ["AMZN_BRAKET_DEVICE_ARN"]
    s3_bucket = os.environ["AMZN_BRAKET_OUT_S3_BUCKET"]
    s3_task_dir = os.environ["AMZN_BRAKET_TASK_RESULTS_S3_URI"].split(s3_bucket)[1]

    device = qml.device(
        "braket.aws.qubit",
        device_arn=device_arn,
        s3_destination_folder=(s3_bucket, s3_task_dir),
        wires=len(features),
        parallel=True
    )
    
    @qml.qnode(device)
    def kernel(x1, x2):
        qml.AngleEmbedding(x1, wires=features)
        qml.adjoint(qml.AngleEmbedding)(x2, wires=features)
        return qml.expval(qml.Projector(range(len(features)), features))

    quantum_kernel = lambda x1, x2: np.array([[kernel(i, j).numpy() for i in x1] for j in x2])
    model = svm.SVC(kernel=quantum_kernel)
    X_train, X_test, y_train, y_test = data
    X_train = X_train[:, features]
    X_test = X_test[:, features]

    model.fit(X_train, y_train)
    score = model.score(X_test, y_test)

    return model, score

The following errors are logged when parallel execution is used:

INFO:backoff:Backing off get_quantum_task(...) for 0.3s (botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the GetQuantumTask operation: Quantum task fec9031b-91e8-42a9-b9de-d670bfb14045 not found)
Traceback (most recent call last):
  File "/opt/ml/code/exec.py", line 24, in <module>
    result = function(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/covalent/executor/base.py", line 101, in wrapper_fn
    return TransportableObject(output)
  File "/usr/local/lib/python3.8/site-packages/covalent/_workflow/transport.py", line 47, in __init__
    self._object = base64.b64encode(cloudpickle.dumps(obj)).decode("utf-8")
  File "/usr/local/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 73, in dumps
    cp.dump(obj)
  File "/usr/local/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 632, in dump
    return Pickler.dump(self, obj)
TypeError: cannot pickle 'SSLContext' object
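
A minimal reproduction of the root failure: an ssl.SSLContext cannot be pickled, and the fitted SVC captures the quantum_kernel closure and hence the Braket device's boto3 session (which holds one). Returning only plain, picklable values (e.g. the float score) from the electron would presumably sidestep this:

import ssl

import cloudpickle

# The Braket device's boto3 session holds an SSLContext; cloudpickle cannot
# serialize it, which is exactly the error seen above.
try:
    cloudpickle.dumps(ssl.create_default_context())
except TypeError as err:
    print(err)  # cannot pickle 'SSLContext' object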

Adjust Braket executor to use a single pre-built image in ECR

To relieve users of having to build a Docker image per task, the bespoke Dockerfiles for each task should be replaced with a single general-purpose Dockerfile, and braket.py should be adjusted to use a single pre-built image in ECR for all tasks. In particular, the Docker image should not hardcode any filenames or other variable task metadata. These two steps are intertwined because the logic in braket.py will depend on the structure of the container runtime and vice versa.

Some experimentation may be needed because the Braket docs alone don't tell the full story about what happens at runtime. A working MVP can be found in the prebuilt-image-exp branch.

Acceptance criteria:

  • Create a Dockerfile in the base of the repo which specifies a container image sufficient to run all Braket tasks. It suffices for now to derive the image from public.ecr.aws/covalent/covalent:latest; the image can be optimized later. Build an image and push it to ECR in SDLC and Testing.

  • The ecr_repo_name parameter in the executor constructor should be replaced by ecr_image_uri, which points to an image in ECR. Until our public image goes up, ecr_image_uri can default to your manually uploaded image in Testing.

  • Modify run(), removing all references to Docker. It should simply perform the following steps: 1) serialize and upload the task inputs to the user-specified S3 bucket; 2) invoke a Braket job using the image at ecr_image_uri; 3) poll Braket and either retrieve the results or handle errors. (A rough sketch follows at the end of this list.)

  • Remove Docker from requirements.txt

  • Adjust unit and functional tests.
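
A rough sketch of the re-structured run() flow (assumptions: boto3's s3 and braket clients; every name and parameter value below is illustrative rather than the final implementation):

import boto3
import cloudpickle


def run_sketch(fn, args, kwargs, *, s3_bucket, ecr_image_uri, role_arn,
               device_arn, job_name):
    # 1) Serialize and upload the task inputs to the user-specified S3 bucket.
    boto3.client("s3").put_object(
        Bucket=s3_bucket,
        Key=f"{job_name}/function.pkl",
        Body=cloudpickle.dumps((fn, args, kwargs)),
    )

    # 2) Invoke a Braket Hybrid Job using the pre-built image at ecr_image_uri.
    job = boto3.client("braket").create_job(
        jobName=job_name,
        roleArn=role_arn,
        algorithmSpecification={"containerImage": {"uri": ecr_image_uri}},
        deviceConfig={"device": device_arn},
        instanceConfig={"instanceType": "ml.m5.large", "volumeSizeInGb": 30},
        outputDataConfig={"s3Path": f"s3://{s3_bucket}/{job_name}/output"},
    )

    # 3) The caller polls this ARN and either retrieves results or handles errors.
    return job["jobArn"]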
