
ai-in-a-box's Introduction

AI-in-a-Box

FTA AI-in-a-Box: Deployment Accelerator

AI-in-a-Box leverages the collective expertise of Microsoft Customer Engineers and Architects across the globe to develop and provide AI and ML solutions to the technical community.

Our intent is to present a curated collection of solution accelerators that can help engineers establish their AI/ML environments and solutions rapidly and with minimal friction, while maintaining the highest standards of quality and efficiency.

As we continue to learn from the market, the contributors will look to equip the community with the tools and resources necessary to succeed in the ever-evolving AI and ML landscape.

Why AI-in-a-Box?

  • Accelerated Deployment: Speed up your solutions with our proven, ready-to-use patterns.
  • Cost Savings: Maximize your budget by reusing existing code and patterns.
  • Enhanced Quality & Reliability: Trust in our solutions, validated through real-world scenarios.
  • Competitive Advantage: Outpace competitors by accelerating solution deployment.



Available Guidance

• Responsible AI: Essential guidance on the responsible use of AI and LLM technologies.
• Security for Generative AI Applications: Specific security guidance for Generative AI (GenAI) applications.
• Scaling OpenAI Applications: Best practices for scaling OpenAI applications within Azure.

Available “-in-a-Box” accelerators

• Azure ML Operationalization in-a-box: Boilerplate data science project from model development to deployment and monitoring.
  • End-to-end MLOps project template
  • Outer Loop (infrastructure setup)
  • Inner Loop (model creation and deployment lifecycle)
• Edge AI in-a-box: Edge AI from model creation to deployment on edge device(s).
  • Create a model and deploy it to an edge device
  • Outer Loop infrastructure setup (IoT Hub, IoT Edge, Edge VM, Container Registry, Azure ML)
  • Inner Loop (model creation and deployment)
• Doc Intelligence in-a-box: Enables companies to automate PDF form processing, modernize operations, save time, and cut costs as part of their digital transformation journey.
  • Receives PDF forms
  • Function App and Logic App for orchestration
  • Document Intelligence model creation for form processing and content extraction
  • Saves PDF data in Azure Cosmos DB
• Image and Video Analysis in-a-box: Extracts information from images and videos with Azure AI Vision and sends the results, along with the prompt and system message, to Azure GPT-4 Turbo with Vision.
  • Orchestration through Azure Data Factory
  • Low-code solution, easily extensible to your own use cases through ADF parameters
  • Reuses the same solution and deployed resources for many different scenarios
  • Saves GPT-4V results to Azure Cosmos DB
• Cognitive Services Landing Zone in-a-box: Minimal enterprise-ready networking and AI Services setup to support most Cognitive Services scenarios in a secure environment.
  • Hub-and-spoke VNet setup and peering
  • Cognitive Services deployment
  • Private Endpoint setup
  • Private DNS integration with PaaS DNS resolver
• Semantic Kernel Bot in-a-box: Extendable solution accelerator for advanced Azure OpenAI bots.
  • Deploys an Azure OpenAI bot to multiple channels (Web, Teams, Slack, etc.)
  • Built-in Retrieval-Augmented Generation (RAG) support
  • Supports custom AI plugins
• NLP to SQL in-a-box: A speech-enabled SQL query system built with Azure OpenAI, Semantic Kernel, and Azure Speech Services: simply speak your data request in natural language.
  • Lets users express queries verbally in natural language
  • Translates them into SQL statements using Azure Speech and Azure OpenAI
  • Executes them on an Azure SQL DB
• Assistants API notebooks: Using the Assistants API, developers can integrate assistants with diverse functionality, from executing code to retrieving data, building versatile digital assistants tailored to users' needs.
  • Covers three main capabilities: Code Interpreter (technical tasks), Retrieval (finding information), and Function Calling (task execution)
  • These capabilities combine into a versatile assistant for handling diverse tasks
• Assistants API Bot in-a-box: A step-by-step guide to deploying a virtual assistant that leverages the Azure OpenAI Assistants API, covering infrastructure deployment, configuration in AI Studio and the Azure Portal, and end-to-end testing examples.
  • Deploys the infrastructure needed to support an Azure OpenAI Assistant
  • Configures an Assistant with the required tools
  • Connects a Bot Framework application to your Assistant to deploy the chat to multiple channels
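The three Assistants API tool capabilities named above can be illustrated with a minimal payload sketch. This is a hedged example, not code from the repo: the function, assistant name, and file id are placeholders, and it only builds the creation payload rather than calling the API.

```python
def assistant_payload(name, instructions, file_ids=()):
    """Build an Assistants-API-style creation payload showing all three tool types."""
    return {
        "name": name,
        "instructions": instructions,
        "tools": [
            {"type": "code_interpreter"},   # execute generated code
            {"type": "retrieval"},          # search uploaded documents
            {"type": "function", "function": {  # call an app-defined function
                "name": "lookup_order",     # hypothetical function name
                "parameters": {"type": "object", "properties": {}},
            }},
        ],
        "file_ids": list(file_ids),
    }

payload = assistant_payload("demo-assistant", "You answer questions.", ["file-abc123"])
print([t["type"] for t in payload["tools"]])  # ['code_interpreter', 'retrieval', 'function']
```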
Key Contacts & Contributors

    If you have any questions or would like to contribute, please reach out to: [email protected]

    Contact GitHub ID Email
    Alex Morales @msalemor [email protected]
    Andre Dewes @andredewes [email protected]
    Andrés Padilla @AndresPad [email protected]
    Chris Ayers @codebytes [email protected]
    Eduardo Noriega @EduardoN [email protected]
    Franklin Guimaraes @franklinlindemberg [email protected]
    Jean Hayes @jehayesms [email protected]
    Marco Aurélio Bigélli Cardoso @MarcoABCardoso [email protected]
    Maria Vrabie @MariaVrabie [email protected]
    Neeraj Jhaveri @neerajjhaveri [email protected]
    Thiago Rotta @rottathiago [email protected]
    Victor Santana @Welasco [email protected]
    Sabyasachi Samaddar @ssamadda [email protected]

    ai-in-a-box's People

    Contributors: @andrespad, @codebytes, @drewelewis, @eduardon, @hyoshioka0128, @jehayesms, @kbaroni, @marcoabcardoso, @microsoftopensource, @msalemor, @neerajjhaveri, @olivmertens, @paul-singh, @saby007, @thomasiverson, @welasco


    ai-in-a-box's Issues

    [BUG] Issue with bot-in-a-box Gen-AI/Assistant accelerator

    Solution Accelerators
    This repository contains multiple solution accelerators. Please tell us which ones are involved in your report. (Replace the space in between square brackets with an x)

    • bot-in-a-box Gen-AI/Assistant accelerator

    Describe the bug
    I tried deploying the solution multiple times following the instructions, but the bot doesn't work. I always get the same error message, whatever prompt I enter.

    [screenshot: bot-error]

    To Reproduce
    Steps to reproduce the behavior:

    1. Deploy the solution according to the instructions in the repo. Make sure to pick the Canada East region, since that is the only region where GPT-4 is available.
    2. Go to the Bot service and try "Test WebChat"
    3. Enter something in the prompt
    4. See error

    Expected behavior
    A valid answer.

    [FEATURE REQUEST] Support for downloading updated files

    [Assistant Bot]

    Is your feature request related to a problem? Please describe.
    I'd like to be able to download files generated/updated by an assistant bot.

    Describe the solution you'd like
    The bot should be able to provide a download link to the files it creates.

    Additional context
    Related: #78

    Issue with running - gen-ai/Assistants/notebooks/autogen/gpt_assistant_agent.ipynb

    Solution Accelerators
    This repository contains multiple solution accelerators. Please tell us which ones are involved in your report. (Replace the space in between square brackets with an x)

    • Cognitive Services Landing Zone
    • Semantic Kernel Bot
    • Azure ML Operationalization
    • gen AI

    Describe the bug
    I am getting the error below when running the gen-ai/Assistants/notebooks/autogen/gpt_assistant_agent.ipynb file. The code block I am running is:

    code block
    +++++++++++++++
    assistant_config = {
        "tools": [
            {"type": "code_interpreter"}
        ],
        "tool_resources": {
            "code_interpreter": {
                "file_ids": [file.id]
            }
        }
    }

    excel_oai_agent = GPTAssistantAgent(
        name="excel_oai_agent",
        instructions="You are a code assistant tool that can run Python code",
        llm_config=llm_config,
        assistant_config=assistant_config,
    )

    user_proxy.initiate_chat(
        recipient=excel_oai_agent,
        message="What are the columns in the excel spreadsheet uploaded?",
    )
    ++++++++++++

    Error
    +++++++++
    WARNING:autogen.agentchat.contrib.gpt_assistant_agent:OpenAI client config of GPTAssistantAgent(excel_oai_agent) - model: gpt4-0125
    WARNING:autogen.agentchat.contrib.gpt_assistant_agent:No matching assistant found, creating a new assistant

    BadRequestError: Error code: 400 - {'error': {'message': "Unknown parameter: 'tool_resources'.", 'type': 'invalid_request_error', 'param': 'tool_resources', 'code': 'unknown_parameter'}}

    BadRequestError Traceback (most recent call last)
    Cell In[11], line 12
    1 assistant_config = {
    2 "tools": [
    3 {"type": "code_interpreter"}
    (...)
    9 }
    10 }
    ---> 12 excel_oai_agent = GPTAssistantAgent(
    13 name="excel_oai_agent",
    14 instructions="You are a code assistant tool that can run Python code",
    15 llm_config=llm_config,
    16 assistant_config=assistant_config,
    17 )
    19 user_proxy.initiate_chat(recipient=excel_oai_agent,
    20 message="What are the columns in the excel spreadsheet uploaded?"
    21 )

    File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/autogen/agentchat/contrib/gpt_assistant_agent.py:104, in GPTAssistantAgent.init(self, name, instructions, llm_config, assistant_config, overwrite_instructions, overwrite_tools, **kwargs)
    100 logger.warning(
    101 "No instructions were provided for new assistant. Using default instructions from AssistantAgent.DEFAULT_SYSTEM_MESSAGE."
    102 )
    103 instructions = AssistantAgent.DEFAULT_SYSTEM_MESSAGE
    --> 104 self._openai_assistant = create_gpt_assistant(
    105 self._openai_client,
    106 name=name,
    107 instructions=instructions,
    108 model=model_name,
    109 assistant_config=openai_assistant_cfg,
    110 )
    111 else:
    112 logger.warning(
    113 "Matching assistant found, using the first matching assistant: %s",
    114 candidate_assistants[0].dict,
    115 )

    File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/autogen/oai/openai_utils.py:762, in create_gpt_assistant(client, name, instructions, model, assistant_config)
    759 assistant_create_kwargs["file_ids"] = assistant_config.get("file_ids", [])
    761 logging.info(f"Creating assistant with config: {assistant_create_kwargs}")
    --> 762 return client.beta.assistants.create(name=name, instructions=instructions, model=model, **assistant_create_kwargs)

    File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/resources/beta/assistants.py:156, in Assistants.create(self, model, description, instructions, metadata, name, response_format, temperature, tool_resources, tools, top_p, extra_headers, extra_query, extra_body, timeout)
    90 """
    91 Create an assistant with a model and instructions.
    92
    (...)
    153 timeout: Override the client-level default timeout for this request, in seconds
    154 """
    155 extra_headers = {"OpenAI-Beta": "assistants=v2", **(extra_headers or {})}
    --> 156 return self._post(
    157 "/assistants",
    158 body=maybe_transform(
    159 {
    160 "model": model,
    161 "description": description,
    162 "instructions": instructions,
    163 "metadata": metadata,
    164 "name": name,
    165 "response_format": response_format,
    166 "temperature": temperature,
    167 "tool_resources": tool_resources,
    168 "tools": tools,
    169 "top_p": top_p,
    170 },
    171 assistant_create_params.AssistantCreateParams,
    172 ),
    173 options=make_request_options(
    174 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    175 ),
    176 cast_to=Assistant,
    177 )

    File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/_base_client.py:1240, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
    1226 def post(
    1227 self,
    1228 path: str,
    (...)
    1235 stream_cls: type[_StreamT] | None = None,
    1236 ) -> ResponseT | _StreamT:
    1237 opts = FinalRequestOptions.construct(
    1238 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
    1239 )
    -> 1240 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

    File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/_base_client.py:921, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    912 def request(
    913 self,
    914 cast_to: Type[ResponseT],
    (...)
    919 stream_cls: type[_StreamT] | None = None,
    920 ) -> ResponseT | _StreamT:
    --> 921 return self._request(
    922 cast_to=cast_to,
    923 options=options,
    924 stream=stream,
    925 stream_cls=stream_cls,
    926 remaining_retries=remaining_retries,
    927 )

    File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/_base_client.py:1020, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
    1017 err.response.read()
    1019 log.debug("Re-raising status error")
    -> 1020 raise self._make_status_error_from_response(err.response) from None
    1022 return self._process_response(
    1023 cast_to=cast_to,
    1024 options=options,
    (...)
    1027 stream_cls=stream_cls,
    1028 )

    ++++++++

    To Reproduce
    Steps to reproduce the behavior:

    1. Open 'gen-ai/Assistants/notebooks/autogen/gpt_assistant_agent.ipynb'
    2. Run the code block shown above
    3. See error


    Additional context
    I installed all the packages in requirements.txt and am using the Azure gpt-4 (0125) model version.
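For context (a hedged note, not from the repo): a 400 "Unknown parameter: 'tool_resources'" typically appears when the installed openai/autogen packages target the Assistants v1 API, which expects file attachments as a top-level file_ids field, while tool_resources is the newer v2 shape. The file id below is a placeholder; the two config shapes compare as:

```python
# Assistants v2-style config: files attached via tool_resources (newer clients).
assistant_config_v2 = {
    "tools": [{"type": "code_interpreter"}],
    "tool_resources": {"code_interpreter": {"file_ids": ["file-abc123"]}},
}

# v1-style equivalent accepted by older clients: files attached at the top level.
assistant_config_v1 = {
    "tools": [{"type": "code_interpreter"}],
    "file_ids": ["file-abc123"],
}
```

If the installed client rejects tool_resources, either upgrade the openai/autogen packages to versions that speak v2, or use the v1-style config they accept.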

    [BUG] Issue with openai version in requirements

    It is not best practice to ship a requirements file without version numbers.
    For example, the following import only works with a specific version of openai:

    from openai.types.beta.threads.message_content_text import MessageContentText

    I had to check when the requirements file was committed into Git to find the correct version.
    In my case, downgrading openai to 1.11.1 fixed it.
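One defensive pattern is to fail fast when the installed package is newer than the version the notebooks were written against. This is a sketch under the assumption that 1.11.1 is the last known-good version (per the report above); nothing in the repo actually pins it.

```python
def needs_downgrade(installed: str, pinned: str = "1.11.1") -> bool:
    """Compare dotted version strings numerically, not lexically."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) > as_tuple(pinned)

print(needs_downgrade("1.30.0"))  # True: newer than the pin, expect import breakage
print(needs_downgrade("1.11.1"))  # False: exactly the known-good version
```

A simpler fix is to pin the dependency directly in requirements.txt (e.g. `openai==1.11.1`) so every environment resolves the same version.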

    [BUG] Issue with running the Semantic Kernel Bot

    Solution Accelerators
    This repository contains multiple solution accelerators. Please tell us which ones are involved in your report. (Replace the space in between square brackets with an x)

    • Cognitive Services Landing Zone
    • [x] Semantic Kernel Bot
    • Azure ML Operationalization
    • Edge AI

    Describe the bug
    When running the solution locally, I get an error:

    Invalid value for 'content': expected a string, got null.
    Status: 400 (model_error)

    Content:
    {
      "error": {
        "message": "Invalid value for 'content': expected a string, got null.",
        "type": "invalid_request_error",
        "param": "messages.[8].content",
        "code": null
      }
    }

    Headers:
    Access-Control-Allow-Origin: REDACTED
    X-Content-Type-Options: REDACTED
    x-ratelimit-remaining-requests: REDACTED
    apim-request-id: REDACTED
    x-ratelimit-remaining-tokens: REDACTED
    X-Request-ID: REDACTED
    ms-azureml-model-error-reason: REDACTED
    ms-azureml-model-error-statuscode: REDACTED
    x-ms-client-request-id: e1043b7c-6211-442f-846d-0c34e7a3fe57
    x-ms-region: REDACTED
    azureml-model-session: REDACTED
    Strict-Transport-Security: REDACTED
    Date: Wed, 24 Jan 2024 09:39:45 GMT
    Content-Length: 188
    Content-Type: application/json

    To Reproduce
    Steps to reproduce the behavior:

    1. Open Solution in Visual Studio
    2. Debug solution and use the emulator
    3. Emulator returns an error
    4. See error

    [BUG] Issue with splitting big files into small chunks

    Solution Accelerators
    This repository contains multiple solution accelerators. Please tell us which ones are involved in your report. (Replace the space in between square brackets with an x)

    • AI Services

    Describe the bug
    This is not entirely a bug; it is more a question about the current implementation.
    Consider a file of 1 GB or more that needs to be split. The current implementation loads it into a temp location. How does the code behave when a large file is uploaded, especially one containing many tables/images? Is this function suitable for that scenario?

    Thanks

    [BUG] Issue with ... gen-ai/Assistants/api-in-a-box/failed_banks/example

    I get the following error when I try to run it on my machine.

    File ~/anaconda3/envs/py310/lib/python3.10/site-packages/openai/_base_client.py:898, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
    895 if not err.response.is_closed:
    896 err.response.read()
    --> 898 raise self._make_status_error_from_response(err.response) from None
    899 except httpx.TimeoutException as err:
    900 if response is not None:

    BadRequestError: Error code: 400 - {'error': {'code': 'invalidPayload', 'message': 'Invalid value for purpose.'}}


    Deploying to Azure, the Test in Web Chat gives an HTTP Status Internal Server Error


    [screenshot: Screenshot 2024-02-12 at 18 41 42]

    [BUG] Issue with ...ResourceGroupNotFound:

    Hi Team,

    Has anyone seen the resource group not being found during deployment, even though the RG has already been created?

    you can view detailed progress in the Azure Portal:

    (✓) Done: Resource group: rg-ai-in-a-box-cj
    (✓) Done: Azure OpenAI: cog-oa-ai-in-a-box-cj-xkp
    (✓) Done: Azure Cosmos DB: cosmos-ai-in-a-box-cj-xkp

    ERROR: deployment failed: failing invoking action 'provision', error deploying infrastructure: deploying to subscription:

    Deployment Error Details:
    ResourceGroupNotFound: Resource group 'rg-ai-in-a-box-cj' could not be found.

    TraceID: 78e963b656aff334291ade67d3c8492b
    PS C:\My Work\12. AI Demo\AI-in-a-Box\semantic-kernel-bot-in-a-box>

    [BUG] Issue with ...

    Solution Accelerators
    This repository contains multiple solution accelerators. Please tell us which ones are involved in your report. (Replace the space in between square brackets with an x)

    • Cognitive Services Landing Zone
    • Semantic Kernel Bot
    • Azure ML Operationalization
    • Edge AI

    Describe the bug
    I get the following error when I try to run the bot in the emulator:

    [screenshot: image]

    [BUG] the DALL-E plugin does not work.

    Solution Accelerators
    This repository contains multiple solution accelerators. Please tell us which ones are involved in your report. (Replace the space in between square brackets with an x)

    • Cognitive Services Landing Zone
    • [x] Semantic Kernel Bot
    • Azure ML Operationalization
    • Edge AI

    Describe the bug
    The DALL-E plugin does not work.

    [screenshot: image]

    Request to update Bing resource and Semantic Kernel Library version

    Solution Accelerators
    This repository contains multiple solution accelerators. Please tell us which ones are involved in your report. (Replace the space in between square brackets with an x)

    • Cognitive Services Landing Zone
    • [x] Semantic Kernel Bot
    • Azure ML Operationalization
    • Edge AI

    Describe the bug
    The Bing Bicep template uses an old Bing configuration that is deprecated; Bing now sits under Cognitive Services. The Semantic Kernel libraries are also out of date, which means new features such as automatic function invocation are not available.


    gpt-video-analysis-in-a-box error "Operation on target Copy GPT4 Response to Cosmos failed"

    When running the ADF orchstratorGetandAnalyzeVideos pipeline, each video's childAnalyzeVideo activity fails with:
    Operation on target Copy GPT4 Response to Cosmos failed: Failure happened on 'Sink' side. ErrorCode=CosmosDbSqlApiOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=CosmosDbSqlApi operation Failed. ErrorMessage: Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: ; Reason: ();.,Source=Microsoft.DataTransfer.ClientLibrary.CosmosDbSqlApiV3,''Type=Microsoft.Azure.Cosmos.CosmosException,Message=Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: ; Reason: ();,Source=Microsoft.Azure.Cosmos.Client,'

    Note: Received the following warnings during deployment:
    C:\Projects\AI-in-a-Box\ai-services\gpt-video-analysis-in-a-box\infra\modules\adfpipelines.bicep(12,33) : Warning no-hardcoded-env-urls: Environment URLs should not be hardcoded. Use the environment() function to ensure compatibility across clouds. Found this disallowed host: "vault.azure.net" [https://aka.ms/bicep/linter/no-hardcoded-env-urls]
    C:\Projects\AI-in-a-Box\ai-services\gpt-video-analysis-in-a-box\infra\modules\adfpipelines.bicep(74,7) : Warning use-resource-id-functions: If property "aadResourceId" represents a resource ID, it must use a symbolic resource reference, be a parameter or start with one of these functions: extensionResourceId, guid, if, reference, resourceId, subscription, subscriptionResourceId, tenantResourceId. [https://aka.ms/bicep/linter/use-resource-id-functions]
    C:\Projects\AI-in-a-Box\ai-services\gpt-video-analysis-in-a-box\infra\modules\adfpipelines.bicep(595,42) : Warning BCP036: The property "Ocp-Apim-Subscription-Key" expected a value of type "string" but the provided value is of type "object".
    C:\Projects\AI-in-a-Box\ai-services\gpt-video-analysis-in-a-box\infra\modules\adfpipelines.bicep(649,42) : Warning BCP036: The property "Ocp-Apim-Subscription-Key" expected a value of type "string" but the provided value is of type "object".
    C:\Projects\AI-in-a-Box\ai-services\gpt-video-analysis-in-a-box\infra\modules\adfpipelines.bicep(735,48) : Warning BCP036: The property "Ocp-Apim-Subscription-Key" expected a value of type "string" but the provided value is of type "object".
    C:\Projects\AI-in-a-Box\ai-services\gpt-video-analysis-in-a-box\infra\modules\adfpipelines.bicep(831,30) : Warning BCP036: The property "api-key" expected a value of type "string" but the provided value is of type "object".

    [BUG] Semantic Kernel Bot in-a-box document upload is not working

    Solution Accelerators
    This repository contains multiple solution accelerators. Please tell us which ones are involved in your report. (Replace the space in between square brackets with an x)

    • Cognitive Services Landing Zone
    • [x] Semantic Kernel Bot
    • Azure ML Operationalization
    • Edge AI

    Describe the bug
    After the Semantic Kernel Bot in-a-box solution is deployed, or when running in local mode, it doesn't allow uploading documents to chat against:

    Document upload not supported as no Document Intelligence endpoint was provided

    Additional context
    I am creating a pull request to resolve this issue

    [BUG] Issue with ...

    Solution Accelerators
    This repository contains multiple solution accelerators. Please tell us which ones are involved in your report. (Replace the space in between square brackets with an x)

    • Cognitive Services Landing Zone
    • Semantic Kernel Bot
    • Azure ML Operationalization
    • Edge AI

    What should be added in SEARCH_SEMANTIC_CONFIG and DIRECT_LINE_SECRET, and what format would you recommend for creating the index?

    [BUG] Issue with ... gen-ai/Assistants/api-in-a-box/assistant-math_tutor/example

    Solution Accelerators
    This repository contains multiple solution accelerators. Please tell us which ones are involved in your report. (Replace the space in between square brackets with an x)

    • Cognitive Services Landing Zone
    • Semantic Kernel Bot
    • Azure ML Operationalization
    • Edge AI
    • Gen AI

    Describe the bug
    I'm getting a "resource not found" error when trying to connect to the model. I followed the .env example and validated that the parameters were being read properly.

    OPENAI_URI=https://NAME.openai.azure.com/
    BASE_URL=https://NAME.openai.azure.com/openai
    OPENAI_KEY=xxxx
    OPENAI_VERSION=2023-07-01-preview
    OPENAI_GPT_DEPLOYMENT=gpt-4

    These were my parameters. I named the deployment the same as the model. I tried models in East US 2 and Sweden Central.
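As a sanity check, the .env values above map onto the AzureOpenAI client parameters roughly as follows. This helper is hypothetical, not part of the repo; a 404 "Resource not found" usually means the endpoint, api-version, or deployment name does not match what is actually deployed in the Azure resource.

```python
def azure_openai_kwargs(env):
    """Map the sample .env names onto openai.AzureOpenAI constructor arguments."""
    return {
        "azure_endpoint": env["OPENAI_URI"].rstrip("/"),
        "api_key": env["OPENAI_KEY"],
        "api_version": env["OPENAI_VERSION"],
    }

kwargs = azure_openai_kwargs({
    "OPENAI_URI": "https://NAME.openai.azure.com/",
    "OPENAI_KEY": "xxxx",
    "OPENAI_VERSION": "2023-07-01-preview",
})
print(kwargs["azure_endpoint"])  # https://NAME.openai.azure.com
# The deployment name (OPENAI_GPT_DEPLOYMENT) is passed per request as `model=`.
```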
    To Reproduce
    Steps to reproduce the behavior:

    1. Go to (https://github.com/Azure/AI-in-a-Box/tree/main/gen-ai/Assistants/api-in-a-box/math_tutor)
    2. Clone it to your local machine
    3. Install the packages from requirements.txt
    4. Create the env file based on the example provided and put it in the api-in-a-box folder
    5. Run all cells up to "Create an Assistant and a Thread". That's where I get the error
    6. This is the error:

    NotFoundError Traceback (most recent call last)
    Cell In[16], line 1
    ----> 1 assistant = client.beta.assistants.create(
    2 name="Math Tutor",
    3 instructions="You are a personal math tutor. Write and run code to answer math questions.",
    4 tools=[{"type": "code_interpreter"}],
    5 model=api_deployment_name,
    6 )
    8 thread = client.beta.threads.create()

    File c:\Users\testuser\AppData\Local\anaconda3\envs\Assistants\Lib\site-packages\openai\resources\beta\assistants\assistants.py:108, in Assistants.create(self, model, description, file_ids, instructions, metadata, name, tools, extra_headers, extra_query, extra_body, timeout)
    70 """
    71 Create an assistant with a model and instructions.
    72
    (...)
    105 timeout: Override the client-level default timeout for this request, in seconds
    106 """
    107 extra_headers = {"OpenAI-Beta": "assistants=v1", **(extra_headers or {})}
    --> 108 return self._post(
    109 "/assistants",
    110 body=maybe_transform(
    111 {
    112 "model": model,
    113 "description": description,
    114 "file_ids": file_ids,
    115 "instructions": instructions,
    116 "metadata": metadata,
    117 "name": name,
    118 "tools": tools,
    119 },
    120 assistant_create_params.AssistantCreateParams,
    121 ),
    122 options=make_request_options(
    123 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    124 ),
    125 cast_to=Assistant,
    126 )

    File c:\Users\testuser\AppData\Local\anaconda3\envs\Assistants\Lib\site-packages\openai\_base_client.py:1200, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
    1186 def post(
    1187 self,
    1188 path: str,
    (...)
    1195 stream_cls: type[_StreamT] | None = None,
    1196 ) -> ResponseT | _StreamT:
    1197 opts = FinalRequestOptions.construct(
    1198 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
    1199 )
    -> 1200 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

    File c:\Users\testuser\AppData\Local\anaconda3\envs\Assistants\Lib\site-packages\openai\_base_client.py:889, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    880 def request(
    881 self,
    882 cast_to: Type[ResponseT],
    (...)
    887 stream_cls: type[_StreamT] | None = None,
    888 ) -> ResponseT | _StreamT:
    --> 889 return self._request(
    890 cast_to=cast_to,
    891 options=options,
    892 stream=stream,
    893 stream_cls=stream_cls,
    894 remaining_retries=remaining_retries,
    895 )

    File c:\Users\testuser\AppData\Local\anaconda3\envs\Assistants\Lib\site-packages\openai_base_client.py:980, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
    977 err.response.read()
    979 log.debug("Re-raising status error")
    --> 980 raise self._make_status_error_from_response(err.response) from None
    982 return self._process_response(
    983 cast_to=cast_to,
    984 options=options,
    (...)
    987 stream_cls=stream_cls,
    988 )

    NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}

    Expected behavior
    I expected no errors when creating an assistant and thread

    Screenshots
    image

    Please complete the following information:

    • OS: Windows

    Additional context
    I ran this code in VS Code
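    For context, a 404 "Resource not found" from the Assistants create call usually means the request reached an endpoint or api-version that does not expose the /assistants route. On Azure OpenAI, Assistants requires a preview api-version and a resource/region that supports it, and the client must be configured with the Azure endpoint (e.g. the AzureOpenAI client class in openai-python rather than the default OpenAI one). The helper below is a minimal sketch for sanity-checking what URL your configuration actually produces; the api-version shown is an assumption, so check the current Azure OpenAI documentation for a version that supports Assistants.

    ```python
    # Hypothetical helper: shows how the Assistants request URL is composed so you
    # can sanity-check the endpoint and api-version your client is hitting.
    # "2024-02-15-preview" is an assumption -- verify the currently supported version.
    def assistants_url(azure_endpoint: str, api_version: str) -> str:
        # Azure OpenAI serves the beta Assistants API under /openai/assistants
        return f"{azure_endpoint.rstrip('/')}/openai/assistants?api-version={api_version}"

    url = assistants_url("https://my-resource.openai.azure.com/", "2024-02-15-preview")
    print(url)  # https://my-resource.openai.azure.com/openai/assistants?api-version=2024-02-15-preview
    ```

    If the same code works against api.openai.com but 404s against Azure, the client is most likely missing the azure_endpoint/api_version configuration rather than the assistant parameters being wrong.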

    Action required: migrate or opt-out of migration to GitHub inside Microsoft

    Migrate non-Open Source or non-External Collaboration repositories to GitHub inside Microsoft

    In order to protect and secure Microsoft, private or internal repositories in GitHub for Open Source which are neither related to open source projects nor used for collaboration with 3rd parties (customers, partners, etc.) must be migrated to GitHub inside Microsoft, a.k.a. GitHub Enterprise Cloud with Enterprise Managed Users (GHEC EMU).

    Action

    ✍️ Please RSVP to opt-in or opt-out of the migration to GitHub inside Microsoft.

    ❗Only users with admin permission in the repository are allowed to respond. Failure to respond will result in your repository being automatically archived.🔒

    Instructions

    Reply with a comment on this issue containing one of the optin or optout commands below.

    ✅ Opt-in to migrate

    @gimsvc optin --date <target_migration_date in mm-dd-yyyy format>
    

    Example: @gimsvc optin --date 03-15-2023

    OR

    ❌ Opt-out of migration

    @gimsvc optout --reason <staging|collaboration|delete|other>
    

    Example: @gimsvc optout --reason staging

    Options:

    • staging : This repository will ship as Open Source or go public
    • collaboration : Used for external or 3rd party collaboration with customers, partners, suppliers, etc.
    • delete : This repository will be deleted because it is no longer needed.
    • other : Other reasons not specified

    Need more help? 🖐️

    [BUG] Issue with Assistant Bot unable to upload files.

    Solution Accelerators
    This repository contains multiple solution accelerators. Please tell us which ones are involved in your report. (Replace the space in between square brackets with an x)

    • [ ] Cognitive Services Landing Zone
    • [ ] Semantic Kernel Bot
    • [ ] Azure ML Operationalization
    • [ ] Edge AI
    • [x] Assistant Bot

    Describe the bug
    I am using the Assistant/bot-in-a-box [project](https://github.com/Azure/AI-in-a-Box/tree/main/gen-ai/Assistants/bot-in-a-box). When I test the bot in "Test in Web Chat" inside the Azure resource by uploading a sample file (I have tried csv and pdf), I only get the following error:

    The bot encountered an error or bug.
    
    To continue to run this bot, please fix the bot source code.
    
    Object reference not set to an instance of an object.
    

    To Reproduce
    Steps to reproduce the behavior:

    1. Go to "Test in Web Chat" for the bot
    2. Attempt to upload a file
    3. See error

    Expected behavior
    The file is correctly uploaded using the code interpreter tooling.

    Screenshots
    Screen Shot 2024-03-06 at 4 02 26 PM

    Please complete the following information:

    • OS: Mac Monterrey 12.5.1
    • Browser Chrome
    • Version 121.0.6167.160 (Official Build) (arm64)

    Additional context
    I am using an unmodified version from the sample code.

    [BUG] Issue with Assistant Bot deploying to azure. Works with Bot Framework Emulator but in Test Web Chat gives an 500 Internal Error

    Solution Accelerators
    This repository contains multiple solution accelerators. Please tell us which ones are involved in your report. (Replace the space in between square brackets with an x)

    • [ ] Cognitive Services Landing Zone
    • [ ] Semantic Kernel Bot
    • [ ] Azure ML Operationalization
    • [ ] Edge AI
    • [x] Assistant Bot

    Describe the bug
    I am using the Assistant/bot-in-a-box [project](https://github.com/Azure/AI-in-a-Box/tree/main/gen-ai/Assistants/bot-in-a-box). This works for me locally using the Bot Framework Emulator, but when I test the bot in "Test in Web Chat" inside the Azure resource, it does not work.

    Expected behavior
    I expect the bot to connect to the Assistant and answer questions.

    Screenshots
    This is the error I got from the App Service log stream, reporting permission issues.
    image

    Deployment Parameters:

    • deploySpeech: false
    • SSO_ENABLE: false
    • location: East US 2

    I am not sure if you need more info.

    Additional context
    I compared this with the other project, SemanticKernelBot (link), and the only difference I found was that AssistantBot, unlike SemanticKernelBot, does not register a BlobServiceClient in Startup.cs.
    image

    Website with given name func-aibx-mlw already exists - MLOps in-a-box

    Hi Team,

    In the Create Azure ML Workspace step (01-create-workspace.yml), the build keeps failing with the error below:
    Process completed with exit code 1.

    Where can I make a manual change to the pre-defined name, given "Website with given name func-aibx-mlw already exists."?

    B. Rgds

    ChanJIn

    ERROR:
    {
      "status": "Failed",
      "error": {
        "code": "DeploymentFailed",
        "target": "/subscriptions/xxxx/resourceGroups/aibx-ml-cj/providers/Microsoft.Resources/deployments/main",
        "message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-deployment-operations for usage details.",
        "details": [{
          "code": "ResourceDeploymentFailure",
          "target": "/subscriptions//resourceGroups/aibx-ml-cj/providers/Microsoft.Resources/deployments/func-aibx-mlw",
          "message": "The resource write operation failed to complete successfully, because it reached terminal provisioning state 'Failed'.",
          "details": [{
            "code": "DeploymentFailed",
            "target": "/subscriptions//resourceGroups/aibx-ml-cj/providers/Microsoft.Resources/deployments/func-aibx-mlw",
            "message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-deployment-operations for usage details.",
            "details": [{
              "code": "Conflict",
              "target": "/subscriptions//resourceGroups/aibx-ml-cj/providers/Microsoft.Web/sites/func-aibx-mlw",
              "message": "Website with given name func-aibx-mlw already exists.",
              "ErrorEntity": {
                "ExtendedCode": "54001",
                "MessageTemplate": "Website with given name {0} already exists.",
                "Parameters": ["func-aibx-mlw"],
                "Code": "Conflict",
                "Message": "Website with given name func-aibx-mlw already exists."
              },
              "Innererror": null
            }]
          }]
        }]
      }
    }
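    App Service site names are globally unique across Azure, so a fixed name like func-aibx-mlw conflicts as soon as any other subscription has already claimed it. Bicep templates typically avoid this by deriving a suffix with uniqueString(resourceGroup().id). The sketch below is a hypothetical Python analogue (not the same hash Bicep uses) just to illustrate the idea of a deterministic per-resource-group suffix; the actual fix is to change or parameterize the function app name in the accelerator's templates.

    ```python
    import hashlib

    # Illustrative only: derive a short deterministic suffix from the resource
    # group id, similar in spirit to Bicep's uniqueString() (different algorithm).
    def unique_suffix(resource_group_id: str, length: int = 5) -> str:
        return hashlib.sha256(resource_group_id.encode()).hexdigest()[:length]

    # Same resource group always yields the same suffix, so redeploys are stable
    # while clashes with other people's globally-named sites become unlikely.
    name = "func-aibx-mlw-" + unique_suffix("/subscriptions/xxxx/resourceGroups/aibx-ml-cj")
    ```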

    [BOX REQUEST] Pattern for leveraging AI Services in Batch

    Request for a new Box for processing batches of data through one or more AI Services.

    Example scenarios:

    • Pulling data from the SharePoint API into Blob Storage and indexing it into AI Search for a RAG-pattern solution
    • Running audio files through Speech to extract transcriptions
    • Running video files through Video Indexer

    Proposed solution:

    • A Data Factory workspace with sample pipelines for batch processing of data through different AI Services
    • Standardized "azd up" provisioning and deployment
    • Customizable set of AI Services, or a multi-service account
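    Each of the proposed pipelines reduces to the same fan-out pattern: enumerate inputs, push each item through an AI service, and record per-item results and failures. A minimal sketch of that driver loop (all names are illustrative; a Data Factory pipeline would express the same thing declaratively with activities and retries):

    ```python
    from typing import Callable, Iterable

    # Illustrative batch driver: apply an AI-service call to each input item,
    # isolating per-item failures so one bad file does not abort the whole batch.
    def run_batch(items: Iterable[str], process: Callable[[str], str]) -> dict:
        results = {}
        for item in items:
            try:
                results[item] = process(item)
            except Exception as exc:
                results[item] = f"error: {exc}"  # a real pipeline would retry and log
        return results

    # process() stands in for a Speech transcription or Video Indexer call
    out = run_batch(["a.wav", "b.wav"], lambda f: f"transcript of {f}")
    ```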
