aws-samples / amazon-bedrock-samples

This repository contains examples for customers to get started using the Amazon Bedrock service, including examples for all available foundation models.

Home Page: https://aws.amazon.com/bedrock/

License: MIT No Attribution

amazon-bedrock bedrock embeddings generative-ai rag amazon-titan knowledge-base langchain

amazon-bedrock-samples's Introduction

Amazon Bedrock Samples

This repository contains pre-built examples to help customers get started with the Amazon Bedrock service.

Contents

Getting Started

To get started with the code examples, ensure you have access to Amazon Bedrock. Then clone this repo and navigate to one of the folders above. Detailed instructions are provided in each folder's README.

Enable AWS IAM permissions for Bedrock

The AWS identity you assume from your environment (the SageMaker Studio/notebook execution role, or an IAM role or user for self-managed notebooks and other use cases) must have sufficient AWS IAM permissions to call the Amazon Bedrock service.

To grant Bedrock access to your identity, you can:

  • Open the AWS IAM Console
  • Find your Role (if using SageMaker or otherwise assuming an IAM Role), or else User
  • Select Add Permissions > Create Inline Policy to attach new inline permissions, then open the JSON editor and paste in the example policy below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BedrockFullAccess",
            "Effect": "Allow",
            "Action": ["bedrock:*"],
            "Resource": "*"
        }
    ]
}

⚠️ Note: With Amazon SageMaker, your notebook execution role will typically be separate from the user or role that you log in to the AWS Console with. If you'd like to explore the AWS Console for Amazon Bedrock, you'll need to grant permissions to your Console user/role too.

For more information on the fine-grained action and resource permissions in Bedrock, check out the Bedrock Developer Guide.
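If you prefer to attach the policy programmatically, a minimal sketch using boto3's `put_role_policy` might look like the following. The role name `MyNotebookRole` is a placeholder, and the commented-out call requires valid AWS credentials:

```python
import json

# The same inline policy as shown above, built as a Python dict.
bedrock_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BedrockFullAccess",
            "Effect": "Allow",
            "Action": ["bedrock:*"],
            "Resource": "*",
        }
    ],
}

policy_json = json.dumps(bedrock_policy)

# Requires AWS credentials and boto3; the role name is a placeholder.
# import boto3
# iam = boto3.client("iam")
# iam.put_role_policy(
#     RoleName="MyNotebookRole",
#     PolicyName="BedrockFullAccess",
#     PolicyDocument=policy_json,
# )
```

For production use, scope the `Action` and `Resource` down per the fine-grained permissions in the Bedrock Developer Guide rather than granting `bedrock:*` on `*`.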

Contributing

We welcome community contributions! Please see CONTRIBUTING.md for guidelines.

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.

amazon-bedrock-samples's People

Contributors

aarora79, aniloncloud, anupam-dewan, barnesrobert, celmore25, cfregly, chpecora, danystinson, eashankaushik, isingh09, iut62elec, jpedram, jpil-builder, kyleblocksom, madhurprash, mani-aiml, markproy, maxtybar, mchugh7153, mttanke, rodzanto, rppth, rsgrewal-aws, soyer-dev, ssinghgai, tagekezo, tchattha, ucegbe, visitani, zhanyany16


amazon-bedrock-samples's Issues

KeyError: 'requestBody' : File "/var/task/lambda_function.py", line 15, in get_named_property event['requestBody']['content']['application/json']['properties']

Hi,

I'm testing "create an action group" for a Bedrock agent that can call a Lambda function. Creating the action group requires an OpenAPI schema that defines the function endpoints, required arguments, and return values.

However, I'm seeing the following error:

LAMBDA_WARNING: Unhandled exception. The most likely cause is an issue in the function code. However, in rare cases, a Lambda runtime update can cause unexpected function behavior. For functions using managed runtimes, runtime updates can be triggered by a function change, or can be applied automatically. To determine if the runtime has been updated, check the runtime version in the INIT_START log entry. If this error correlates with a change in the runtime version, you may be able to mitigate this error by temporarily rolling back to the previous runtime version. For more information, see https://docs.aws.amazon.com/lambda/latest/dg/runtimes-update.html

[ERROR] KeyError: 'requestBody'
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 97, in lambda_handler
    internal_id = get_named_property(event, "internalId")
  File "/var/task/lambda_function.py", line 15, in get_named_property
    event['requestBody']['content']['application/json']['properties']

Can someone please look into this?
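One way to avoid the unhandled KeyError is to make the helper defensive: `requestBody` only appears in agent events when the OpenAPI operation declares one. A sketch of such a rewrite (the event layout follows the Bedrock agent action-group event format; verify against your own events):

```python
def get_named_property(event, name):
    """Safely pull a named property from a Bedrock agent action-group event.

    Returns None instead of raising KeyError when the invoked operation
    has no requestBody in its OpenAPI schema.
    """
    properties = (
        event.get("requestBody", {})
        .get("content", {})
        .get("application/json", {})
        .get("properties", [])
    )
    for prop in properties:
        if prop.get("name") == name:
            return prop.get("value")
    return None
```

For operations without a request body, the agent typically passes arguments in the event's `parameters` list instead, so a robust handler should check both locations.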

Unable to clone rag-solutions repository

Tried to clone rag-solutions using the following command:
git clone --depth 2 --filter=blob:none --no-checkout https://github.com/aws-samples/amazon-bedrock-samples && cd amazon-bedrock-samples && git checkout main rag-solutions/contextual-chatbot-using-knowledgebase

Getting the following error:
error: unable to read sha1 file of rag-solutions/contextual-chatbot-using-knowledgebase/README.md (84148615badfddcd8dd778b1c8485ae44dab54e1)
Unlink of file 'rag-solutions' failed. Should I try again? (y/n)


amazon-bedrock-samples/knowledge-bases/0_create_ingest_documents_test_kb.ipynb

ParamValidationError: Parameter validation failed:
Invalid bucket name "<bucket_name>": Bucket name must match the regex "^[a-zA-Z0-9.-_]{1,255}$" or be an ARN matching the regex "^arn:(aws).:(s3|s3-object-lambda):[a-z-0-9]:[0-9]{12}:accesspoint[/:][a-zA-Z0-9-.]{1,63}$|^arn:(aws).*:s3-outposts:[a-z-0-9]+:[0-9]{12}:outpost[/:][a-zA-Z0-9-]{1,63}[/:]accesspoint[/:][a-zA-Z0-9-]{1,63}$"
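This error usually means the `<bucket_name>` placeholder in the notebook was never replaced with a real bucket name, so the literal angle-bracketed string is rejected. A quick local check before calling S3 (simplified rules for general-purpose bucket names; the full constraints are in the S3 documentation):

```python
import re

# Simplified S3 bucket-name check: 3-63 characters, lowercase letters,
# digits, hyphens, and dots, starting and ending with an alphanumeric.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")


def is_valid_bucket_name(name: str) -> bool:
    """Return True if `name` looks like a valid S3 bucket name."""
    return bool(BUCKET_NAME_RE.match(name))
```

Calling `is_valid_bucket_name("<bucket_name>")` returns False, which surfaces the unreplaced placeholder before boto3's ParamValidationError does.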

The index creation for collections should change to FAISS

hi,

According to the Bedrock knowledge base documentation, the vector index should now use "faiss" as the engine.

If the "nmslib" engine is still used as written in the sample code, users will get an error like this:

[ERROR] ValidationException: An error occurred (ValidationException) when calling the CreateKnowledgeBase operation: The knowledge base storage configuration provided is invalid... The OpenSearch Serverless engine type associated with your vector index is invalid. Recreate your vector index with one of the following valid engine types: FAISS. Then retry your request.
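A sketch of an OpenSearch Serverless index body using the faiss engine (the dimension 1536 matches Titan embeddings; field names are illustrative and should match whatever the notebook's knowledge base configuration expects):

```python
# Index body for an OpenSearch Serverless vector collection, using the
# "faiss" engine that Bedrock knowledge bases now require.
index_body = {
    "settings": {"index.knn": True},
    "mappings": {
        "properties": {
            "vector": {
                "type": "knn_vector",
                "dimension": 1536,  # Titan embeddings output dimension
                "method": {
                    "name": "hnsw",
                    "engine": "faiss",  # was "nmslib" in the sample code
                    "space_type": "l2",
                },
            },
            "text": {"type": "text"},
            "metadata": {"type": "text"},
        }
    },
}

# With an opensearch-py client configured for the collection:
# response = oss_client.indices.create(index=index_name, body=index_body)
```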

knowledge base creation issue

The code given in this notebook fails with the error below while creating the knowledge base.

err=ValidationException('An error occurred (ValidationException) when calling the CreateKnowledgeBase operation: The knowledge base storage configuration provided is invalid... The OpenSearch Serverless engine type associated with your vector index is invalid. Recreate your vector index with one of the following valid engine types: FAISS. Then retry your request.'), type(err)=<class 'botocore.errorfactory.ValidationException'>

I changed the engine type to 'faiss', but then the vector index creation itself fails. Maybe we need to change the OpenSearch version used in this notebook.

`TransportError Traceback (most recent call last)
Cell In[34], line 2
1 # Create index
----> 2 response = oss_client.indices.create(index=index_name, body=json.dumps(body_json))
3 print('\nCreating index:')
4 print(response)

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/opensearchpy/client/utils.py:179, in query_params.._wrapper.._wrapped(*args, **kwargs)
176 if v is not None:
177 params[p] = _escape(v)
--> 179 return func(*args, params=params, headers=headers, **kwargs)

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/opensearchpy/client/indices.py:128, in IndicesClient.create(self, index, body, params, headers)
125 if index in SKIP_IN_PATH:
126 raise ValueError("Empty value passed for a required argument 'index'.")
--> 128 return self.transport.perform_request(
129 "PUT", _make_path(index), params=params, headers=headers, body=body
130 )

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/opensearchpy/transport.py:409, in Transport.perform_request(self, method, url, headers, params, body)
407 raise e
408 else:
--> 409 raise e
411 else:
412 # connection didn't fail, confirm it's live status
413 self.connection_pool.mark_live(connection)

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/opensearchpy/transport.py:370, in Transport.perform_request(self, method, url, headers, params, body)
367 connection = self.get_connection()
369 try:
--> 370 status, headers_response, data = connection.perform_request(
371 method,
372 url,
373 params,
374 body,
375 headers=headers,
376 ignore=ignore,
377 timeout=timeout,
378 )
380 # Lowercase all the header names for consistency in accessing them.
381 headers_response = {
382 header.lower(): value for header, value in headers_response.items()
383 }

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/opensearchpy/connection/http_requests.py:230, in RequestsHttpConnection.perform_request(self, method, url, params, body, timeout, allow_redirects, ignore, headers)
217 if (
218 not (200 <= response.status_code < 300)
219 and response.status_code not in ignore
220 ):
221 self.log_request_fail(
222 method,
223 url,
(...)
228 raw_data,
229 )
--> 230 self._raise_error(
231 response.status_code,
232 raw_data,
233 response.headers.get("Content-Type"),
234 )
236 self.log_request_success(
237 method,
238 url,
(...)
243 duration,
244 )
246 return response.status_code, response.headers, raw_data

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/opensearchpy/connection/base.py:301, in Connection._raise_error(self, status_code, raw_data, content_type)
298 except (ValueError, TypeError) as err:
299 logger.warning("Undecodable raw error response from server: %s", err)
--> 301 raise HTTP_EXCEPTIONS.get(status_code, TransportError)(
302 status_code, error_message, additional_info
303 )

TransportError: TransportError(500, 'exception', 'Internal error occurred while processing request')`

text-2-sql-agent

The agent was not able to correctly answer the question: "In what year did Hank Aaron hit the most home runs?"

Trace: "The query failed, so the batting table does not exist or does not contain home run data. Looking at the schema again, there is no batting table. I will try querying the player table instead, which contains basic player information."

So the agent answers: "Based on the available data, Hank Aaron was born in the year 1934. The SQL query used was: SELECT p.name_first, p.name_last, p.birth_year AS yearid, 0 AS hr FROM player p WHERE p.name_first = 'Hank' AND p.name_last = 'Aaron' ORDER BY yearid DESC LIMIT 1;"

Root cause: the provided TheHistoryofBaseball.zip contains:

  • hall_of_fame.csv
  • player.csv
  • player_award.csv
  • salary.csv

but the batting table appears to be missing.
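With a batting table present, the intended lookup works. A minimal in-memory SQLite reproduction (table and column names assumed from the Lahman-style baseball dataset the sample appears to use):

```python
import sqlite3

# Recreate a tiny batting table like the one missing from the zip.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE batting (player_id TEXT, yearid INTEGER, hr INTEGER)"
)
conn.executemany(
    "INSERT INTO batting VALUES (?, ?, ?)",
    [("aaronha01", 1957, 44), ("aaronha01", 1971, 47), ("aaronha01", 1963, 44)],
)

# The query the agent should have been able to run: best home-run season.
year, hr = conn.execute(
    "SELECT yearid, hr FROM batting WHERE player_id = 'aaronha01' "
    "ORDER BY hr DESC, yearid LIMIT 1"
).fetchone()
```

Hank Aaron's best single-season total was 47 home runs in 1971, which the query recovers once the batting data exists.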

In https://github.com/aws-samples/amazon-bedrock-samples/tree/main/agents/insurance-agent-sendreminders, it is not specified whether "user input" should be set to "yes" or "no"


I suggest adding screenshots of exactly what should be done in the console UI, similar to what is done for the "agentsforbedrock-retailagent" sample here: https://github.com/aws-samples/amazon-bedrock-samples/tree/main/agents/agentsforbedrock-retailagent

retail agent

When you ask the agent for shoes, it asks for your name.
If your name is not present in the database, the agent is not able to handle this edge case.

Root cause:

  • csbot_lambda_function doesn't handle an empty query result
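A sketch of how the Lambda could guard against an empty lookup (the function names and fallback message are hypothetical; `query_db` stands in for whatever database query the sample performs):

```python
def find_customer(name, query_db):
    """Return the first record matching `name`, or None if there is none.

    `query_db` is any callable returning a list of matching records,
    standing in for the sample's database lookup.
    """
    results = query_db(name)
    if not results:
        return None
    return results[0]


def build_response(name, query_db):
    """Build a reply, falling back gracefully for unknown customers."""
    customer = find_customer(name, query_db)
    if customer is None:
        # Graceful fallback instead of an unhandled IndexError/KeyError.
        return f"Sorry, I couldn't find a customer named {name}."
    return f"Welcome back, {customer['name']}!"
```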

error deploying `bedrock-agent-cdk` example

I got the following error when following the instructions in bedrock-agent-cdk. /cc: @rppth

cdk bootstrap
/Users/jritsema/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:859
    return new TSError(diagnosticText, diagnosticCodes, diagnostics);
           ^
TSError: ⨯ Unable to compile TypeScript:
bin/bedrock-agent-cdk.ts:3:23 - error TS2307: Cannot find module 'path' or its corresponding type declarations.

3 import * as path from 'path';
                        ~~~~~~
bin/bedrock-agent-cdk.ts:4:23 - error TS2307: Cannot find module 'glob' or its corresponding type declarations.

4 import * as glob from 'glob';
                        ~~~~~~
bin/bedrock-agent-cdk.ts:5:22 - error TS2307: Cannot find module 'aws-cdk-lib' or its corresponding type declarations.

5 import * as cdk from 'aws-cdk-lib';
                       ~~~~~~~~~~~~~

    at createTSError (/Users/jritsema/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:859:12)
    at reportTSError (/Users/jritsema/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:863:19)
    at getOutput (/Users/jritsema/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:1077:36)
    at Object.compile (/Users/jritsema/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:1433:41)
    at Module.m._compile (/Users/jritsema/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:1617:30)
    at Module._extensions..js (node:internal/modules/cjs/loader:1310:10)
    at Object.require.extensions.<computed> [as .ts] (/Users/jritsema/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:1621:12)
    at Module.load (node:internal/modules/cjs/loader:1119:32)
    at Function.Module._load (node:internal/modules/cjs/loader:960:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12) {
  diagnosticCodes: [ 2307, 2307, 2307 ]
}

Here are my dependencies

cdk --version
2.113.0 (build ccd534a)

node --version
v18.17.1

npm --version
9.6.7

Custom model import - Llama 3 8B sample: Wrong torch_type in PEFT model

Hello -- The sample for importing a custom model into Bedrock has a bug when loading the PEFT model.

The current script will finish training but will result in a model that Bedrock cannot load.

# load PEFT model
model = AutoPeftModelForCausalLM.from_pretrained(
    training_args.output_dir,
    low_cpu_mem_usage=True,
    torch_dtype=torch_dtype
)

The issue appears to be fixed by changing the torch_dtype definition to torch.float16.

Knowledge bases notebook fails with Llama-index Import Error

Notebook knowledge-bases/4_customized-rag-retreive-api-titan-lite-evaluation fails with an ImportError; the message is below. There seems to be a circular dependency involving the ChatResponseAsyncGen module.

Steps to reproduce:

  1. Run the notebook as-is from a SageMaker Studio Classic notebook instance with the Data Science 3.0 kernel.

ImportError: cannot import name 'ChatResponseAsyncGen' from partially initialized module 'llama_index.llms' (most likely due to a circular import) (/opt/conda/lib/python3.10/site-packages/llama_index/llms/__init__.py)

Full Stack trace below.

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[7], line 5
      3 from botocore.client import Config
      4 from langchain.llms.bedrock import Bedrock
----> 5 from llama_index import (
      6     ServiceContext,
      7     set_global_service_context
      8 )
      9 from langchain.embeddings.bedrock import BedrockEmbeddings
     10 from llama_index.embeddings import LangchainEmbedding

File /opt/conda/lib/python3.10/site-packages/llama_index/__init__.py:13
     10 from typing import Callable, Optional
     12 # import global eval handler
---> 13 from llama_index.callbacks.global_handlers import set_global_handler
     14 from llama_index.data_structs.struct_type import IndexStructType
     16 # embeddings

File /opt/conda/lib/python3.10/site-packages/llama_index/callbacks/__init__.py:7
      5 from .open_inference_callback import OpenInferenceCallbackHandler
      6 from .schema import CBEvent, CBEventType, EventPayload
----> 7 from .token_counting import TokenCountingHandler
      8 from .utils import trace_method
      9 from .wandb_callback import WandbCallbackHandler

File /opt/conda/lib/python3.10/site-packages/llama_index/callbacks/token_counting.py:6
      4 from llama_index.callbacks.base_handler import BaseCallbackHandler
      5 from llama_index.callbacks.schema import CBEventType, EventPayload
----> 6 from llama_index.utilities.token_counting import TokenCounter
      7 from llama_index.utils import get_tokenizer
     10 @dataclass
     11 class TokenCountingEvent:

File /opt/conda/lib/python3.10/site-packages/llama_index/utilities/token_counting.py:6
      1 # Modified from:
      2 # https://github.com/nyno-ai/openai-token-counter
      4 from typing import Any, Callable, Dict, List, Optional
----> 6 from llama_index.llms import ChatMessage, MessageRole
      7 from llama_index.utils import get_tokenizer
     10 class TokenCounter:

File /opt/conda/lib/python3.10/site-packages/llama_index/llms/__init__.py:2
      1 from llama_index.llms.ai21 import AI21
----> 2 from llama_index.llms.anthropic import Anthropic
      3 from llama_index.llms.anyscale import Anyscale
      4 from llama_index.llms.azure_openai import AzureOpenAI

File /opt/conda/lib/python3.10/site-packages/llama_index/llms/anthropic/__init__.py:1
----> 1 from llama_index.llms.anthropic.base import Anthropic
      3 __all__ = ["Anthropic"]

File /opt/conda/lib/python3.10/site-packages/llama_index/llms/anthropic/base.py:3
      1 from typing import Any, Callable, Dict, Optional, Sequence
----> 3 from llama_index.core.base.llms.types import (
      4     ChatMessage,
      5     ChatResponse,
      6     ChatResponseAsyncGen,
      7     ChatResponseGen,
      8     CompletionResponse,
      9     CompletionResponseAsyncGen,
     10     CompletionResponseGen,
     11     LLMMetadata,
     12     MessageRole,
     13 )
     14 from llama_index.core.bridge.pydantic import Field, PrivateAttr
     15 from llama_index.core.callbacks import CallbackManager

File /opt/conda/lib/python3.10/site-packages/llama_index/core/__init__.py:2
      1 from llama_index.core.base_query_engine import BaseQueryEngine
----> 2 from llama_index.core.base_retriever import BaseRetriever
      4 __all__ = ["BaseRetriever", "BaseQueryEngine"]

File /opt/conda/lib/python3.10/site-packages/llama_index/core/base_retriever.py:7
      5 from llama_index.prompts.mixin import PromptDictType, PromptMixin, PromptMixinType
      6 from llama_index.schema import NodeWithScore, QueryBundle, QueryType
----> 7 from llama_index.service_context import ServiceContext
     10 class BaseRetriever(PromptMixin):
     11     """Base retriever."""

File /opt/conda/lib/python3.10/site-packages/llama_index/service_context.py:9
      7 from llama_index.callbacks.base import CallbackManager
      8 from llama_index.embeddings.base import BaseEmbedding
----> 9 from llama_index.embeddings.utils import EmbedType, resolve_embed_model
     10 from llama_index.indices.prompt_helper import PromptHelper
     11 from llama_index.llm_predictor import LLMPredictor

File /opt/conda/lib/python3.10/site-packages/llama_index/embeddings/__init__.py:20
     18 from llama_index.embeddings.google_palm import GooglePaLMEmbedding
     19 from llama_index.embeddings.gradient import GradientEmbedding
---> 20 from llama_index.embeddings.huggingface import (
     21     HuggingFaceEmbedding,
     22     HuggingFaceInferenceAPIEmbedding,
     23     HuggingFaceInferenceAPIEmbeddings,
     24 )
     25 from llama_index.embeddings.huggingface_optimum import OptimumEmbedding
     26 from llama_index.embeddings.huggingface_utils import DEFAULT_HUGGINGFACE_EMBEDDING_MODEL

File /opt/conda/lib/python3.10/site-packages/llama_index/embeddings/huggingface.py:17
     11 from llama_index.embeddings.huggingface_utils import (
     12     DEFAULT_HUGGINGFACE_EMBEDDING_MODEL,
     13     format_query,
     14     format_text,
     15 )
     16 from llama_index.embeddings.pooling import Pooling
---> 17 from llama_index.llms.huggingface import HuggingFaceInferenceAPI
     18 from llama_index.utils import get_cache_dir, infer_torch_device
     20 if TYPE_CHECKING:

File /opt/conda/lib/python3.10/site-packages/llama_index/llms/huggingface.py:11
      6 from llama_index.callbacks import CallbackManager
      7 from llama_index.constants import (
      8     DEFAULT_CONTEXT_WINDOW,
      9     DEFAULT_NUM_OUTPUTS,
     10 )
---> 11 from llama_index.llms import ChatResponseAsyncGen, CompletionResponseAsyncGen
     12 from llama_index.llms.base import (
     13     LLM,
     14     ChatMessage,
   (...)
     22     llm_completion_callback,
     23 )
     24 from llama_index.llms.custom import CustomLLM

ImportError: cannot import name 'ChatResponseAsyncGen' from partially initialized module 'llama_index.llms' (most likely due to a circular import) (/opt/conda/lib/python3.10/site-packages/llama_index/llms/__init__.py)


Create Guardrails using Infrastructure as Code (IaC)

Rather than only using the Python 3 boto3 SDK to deploy the guardrails, it would be good to have an end-to-end solution incorporating IaC (e.g. CloudFormation) and an end-user application (e.g. Streamlit).
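As a starting point, a guardrail can be declared in CloudFormation via the AWS::Bedrock::Guardrail resource. A minimal sketch (the logical ID, name, and messages are placeholders, and only a single content filter is shown; check the resource reference for the full property set):

```yaml
Resources:
  SampleGuardrail:
    Type: AWS::Bedrock::Guardrail
    Properties:
      Name: sample-guardrail
      BlockedInputMessaging: "Sorry, I can't discuss that topic."
      BlockedOutputsMessaging: "Sorry, I can't discuss that topic."
      ContentPolicyConfig:
        FiltersConfig:
          - Type: HATE
            InputStrength: HIGH
            OutputStrength: HIGH
```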

Customer relationship agent fails to deploy

Stack deployment fails at StreamlitAutoScalingTarget with error "Resource handler returned message: "Unable to assume IAM role: arn:aws:iam::942514891246:role/aws-service-role/ecs.application-autoscaling.amazonaws.com/AWSServiceRoleForApplicationAutoScaling_ECSService (Service: ApplicationAutoScaling, Status Code: 400, Request ID: 8e0a21e4-bcba-42dc-9649-a637db947a72)" (RequestToken: cd83145c-37e2-74b0-84d5-5e70b2fcb052, HandlerErrorCode: InvalidRequest)"
