azure / AI-in-a-Box
License: MIT License
Hi Team,
Has anyone faced a "resource group not found" error during deployment? My resource group has already been created.
you can view detailed progress in the Azure Portal:
(✓) Done: Resource group: rg-ai-in-a-box-cj
(✓) Done: Azure OpenAI: cog-oa-ai-in-a-box-cj-xkp
(✓) Done: Azure Cosmos DB: cosmos-ai-in-a-box-cj-xkp
ERROR: deployment failed: failing invoking action 'provision', error deploying infrastructure: deploying to subscription:
Deployment Error Details:
ResourceGroupNotFound: Resource group 'rg-ai-in-a-box-cj' could not be found.
TraceID: 78e963b656aff334291ade67d3c8492b
PS C:\My Work\12. AI Demo\AI-in-a-Box\semantic-kernel-bot-in-a-box>
I can't deploy because of this.
In order to protect and secure Microsoft, private or internal repositories in GitHub for Open Source which are not related to open source projects or require collaboration with 3rd parties (customers, partners, etc.) must be migrated to GitHub inside Microsoft, a.k.a. GitHub Enterprise Cloud with Enterprise Managed Users (GHEC EMU).
✍️ Please RSVP to opt-in or opt-out of the migration to GitHub inside Microsoft.
❗ Only users with admin permission in the repository are allowed to respond. Failure to provide a response will result in your repository getting automatically archived. 🔒
Reply with a comment on this issue containing one of the following optin or optout command options below.
✅ Opt-in to migrate
@gimsvc optin --date <target_migration_date in mm-dd-yyyy format>
Example:
@gimsvc optin --date 03-15-2023
OR
❌ Opt-out of migration
@gimsvc optout --reason <staging|collaboration|delete|other>
Example:
@gimsvc optout --reason staging
Options:
staging: This repository will ship as Open Source or go public.
collaboration: Used for external or 3rd-party collaboration with customers, partners, suppliers, etc.
delete: This repository will be deleted because it is no longer needed.
other: Other reasons not specified.
Solution Accelerators
This repository contains multiple solution accelerators. Please tell us which ones are involved in your report. (Replace the space in between square brackets with an x)
Describe the bug
When running the solution locally, I get an error:
Invalid value for 'content': expected a string, got null.
Status: 400 (model_error)
Content:
{
  "error": {
    "message": "Invalid value for 'content': expected a string, got null.",
    "type": "invalid_request_error",
    "param": "messages.[8].content",
    "code": null
  }
}
Headers (values REDACTED): Access-Control-Allow-Origin, X-Content-Type-Options, x-ratelimit-remaining-requests, apim-request-id, x-ratelimit-remaining-tokens, X-Request-ID, ms-azureml-model-error-reason, ms-azureml-model-error-statuscode, x-ms-region, azureml-model-session, Strict-Transport-Security
x-ms-client-request-id: e1043b7c-6211-442f-846d-0c34e7a3fe57
Date: Wed, 24 Jan 2024 09:39:45 GMT
Content-Length: 188
Content-Type: application/json
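The 400 above says one entry in the message history (messages[8]) has content: null, while the service requires a string. A defensive filter over the history before calling the API might look like this (a sketch; the dict message shape is an assumption based on the error):

```python
def drop_null_content(messages):
    """Remove chat messages whose 'content' is None, which the service rejects."""
    return [m for m in messages if m.get("content") is not None]

history = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": None},  # this entry would trigger the 400 above
]
cleaned = drop_null_content(history)
```

Filtering is a workaround; the real fix is finding which step writes a null message into the history.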
To Reproduce
Steps to reproduce the behavior:
It is not best practice to have a requirements file without version numbers.
For example, the following only works with a specific version of openai:
from openai.types.beta.threads.message_content_text import MessageContentText
I have to check when the requirements file was committed into Git to find the correct version.
In my case, I downgraded openai to 1.11.1.
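For reference, a pinned requirements.txt entry avoids this drift (1.11.1 is simply the version that worked in this case):

```
openai==1.11.1
```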
Describe the bug
I am using the Assistants/bot-in-a-box project (https://github.com/Azure/AI-in-a-Box/tree/main/gen-ai/Assistants/bot-in-a-box). When I test the bot in "Test in Web Chat" inside the Azure resource by uploading a sample file (I have tried csv and pdf), I only get the following error:
The bot encountered an error or bug.
To continue to run this bot, please fix the bot source code.
Object reference not set to an instance of an object.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The file is correctly uploaded using the code interpreter tooling.
Please complete the following information:
Additional context
I am using an unmodified version from the sample code.
Describe the bug
I'm getting a "resource not found" error when trying to connect to the model. I followed the .env example and validated that the parameters were being read properly.
OPENAI_URI=https://NAME.openai.azure.com/
BASE_URL=https://NAME.openai.azure.com/openai
OPENAI_KEY=xxxx
OPENAI_VERSION=2023-07-01-preview
OPENAI_GPT_DEPLOYMENT=gpt-4
These were my parameters. I named the deployment the same as the model. I tried models in East US 2 and Sweden Central.
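A quick, offline sanity check of these settings (assuming the exact variable names above; this only validates shape and cannot catch a wrong region, a missing deployment, or an API version the Assistants API does not support):

```python
from urllib.parse import urlparse

def check_aoai_env(env):
    """Return a list of problems found in the Azure OpenAI settings."""
    problems = []
    for key in ("OPENAI_URI", "OPENAI_KEY", "OPENAI_VERSION", "OPENAI_GPT_DEPLOYMENT"):
        if not env.get(key):
            problems.append(f"{key} is missing or empty")
    uri = env.get("OPENAI_URI", "")
    if uri and urlparse(uri).scheme != "https":
        problems.append("OPENAI_URI should be an https:// URL")
    return problems

settings = {
    "OPENAI_URI": "https://NAME.openai.azure.com/",
    "OPENAI_KEY": "xxxx",
    "OPENAI_VERSION": "2023-07-01-preview",
    "OPENAI_GPT_DEPLOYMENT": "gpt-4",
}
```

If all of these pass, the 404 usually points at the service side: the deployment name, the region's Assistants support, or the API version.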
To Reproduce
Steps to reproduce the behavior:
NotFoundError Traceback (most recent call last)
Cell In[16], line 1
----> 1 assistant = client.beta.assistants.create(
2 name="Math Tutor",
3 instructions="You are a personal math tutor. Write and run code to answer math questions.",
4 tools=[{"type": "code_interpreter"}],
5 model=api_deployment_name,
6 )
8 thread = client.beta.threads.create()
File c:\Users\testuser\AppData\Local\anaconda3\envs\Assistants\Lib\site-packages\openai\resources\beta\assistants\assistants.py:108, in Assistants.create(self, model, description, file_ids, instructions, metadata, name, tools, extra_headers, extra_query, extra_body, timeout)
70 """
71 Create an assistant with a model and instructions.
72
(...)
105 timeout: Override the client-level default timeout for this request, in seconds
106 """
107 extra_headers = {"OpenAI-Beta": "assistants=v1", **(extra_headers or {})}
--> 108 return self._post(
109 "/assistants",
110 body=maybe_transform(
111 {
112 "model": model,
113 "description": description,
114 "file_ids": file_ids,
115 "instructions": instructions,
116 "metadata": metadata,
117 "name": name,
118 "tools": tools,
119 },
120 assistant_create_params.AssistantCreateParams,
121 ),
122 options=make_request_options(
123 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
124 ),
125 cast_to=Assistant,
126 )
File c:\Users\testuser\AppData\Local\anaconda3\envs\Assistants\Lib\site-packages\openai\_base_client.py:1200, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1186 def post(
1187 self,
1188 path: str,
(...)
1195 stream_cls: type[_StreamT] | None = None,
1196 ) -> ResponseT | _StreamT:
1197 opts = FinalRequestOptions.construct(
1198 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
1199 )
-> 1200 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File c:\Users\testuser\AppData\Local\anaconda3\envs\Assistants\Lib\site-packages\openai\_base_client.py:889, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
880 def request(
881 self,
882 cast_to: Type[ResponseT],
(...)
887 stream_cls: type[_StreamT] | None = None,
888 ) -> ResponseT | _StreamT:
--> 889 return self._request(
890 cast_to=cast_to,
891 options=options,
892 stream=stream,
893 stream_cls=stream_cls,
894 remaining_retries=remaining_retries,
895 )
File c:\Users\testuser\AppData\Local\anaconda3\envs\Assistants\Lib\site-packages\openai\_base_client.py:980, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
977 err.response.read()
979 log.debug("Re-raising status error")
--> 980 raise self._make_status_error_from_response(err.response) from None
982 return self._process_response(
983 cast_to=cast_to,
984 options=options,
(...)
987 stream_cls=stream_cls,
988 )
NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
Expected behavior
I expected no errors when creating the assistant and thread.
Please complete the following information:
Additional context
I ran this code in VS Code
Describe the bug
I tried deploying the solution multiple times following the instructions, but the bot doesn't work. I always get the same error message, whatever prompt I enter.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
A valid answer.
What should be added in SEARCH_SEMANTIC_CONFIG and DIRECT_LINE_SECRET, and what format would you recommend for creating the index?
Describe the bug
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots
If applicable, add screenshots to help explain your problem.
Please complete the following information:
Additional context
Add any other context about the problem here.
Describe the bug
A clear and concise description of what the bug is.
I am getting the below error when running the gen-ai/Assistants/notebooks/autogen/gpt_assistant_agent.ipynb file. The code block I am running is below:
code block
+++++++++++++++
assistant_config = {
    "tools": [
        {"type": "code_interpreter"}
    ],
    "tool_resources": {
        "code_interpreter": {
            "file_ids": [file.id]
        }
    }
}

excel_oai_agent = GPTAssistantAgent(
    name="excel_oai_agent",
    instructions="You are a code assistant tool that can run Python code",
    llm_config=llm_config,
    assistant_config=assistant_config,
)

user_proxy.initiate_chat(
    recipient=excel_oai_agent,
    message="What are the columns in the excel spreadsheet uploaded?",
)
++++++++++++
BadRequestError: Error code: 400 - {'error': {'message': "Unknown parameter: 'tool_resources'.", 'type': 'invalid_request_error', 'param': 'tool_resources', 'code': 'unknown_parameter'}}
BadRequestError Traceback (most recent call last)
Cell In[11], line 12
1 assistant_config = {
2 "tools": [
3 {"type": "code_interpreter"}
(...)
9 }
10 }
---> 12 excel_oai_agent = GPTAssistantAgent(
13 name="excel_oai_agent",
14 instructions="You are a code assistant tool that can run Python code",
15 llm_config=llm_config,
16 assistant_config=assistant_config,
17 )
19 user_proxy.initiate_chat(recipient=excel_oai_agent,
20 message="What are the columns in the excel spreadsheet uploaded?"
21 )
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/autogen/agentchat/contrib/gpt_assistant_agent.py:104, in GPTAssistantAgent.__init__(self, name, instructions, llm_config, assistant_config, overwrite_instructions, overwrite_tools, **kwargs)
100 logger.warning(
101 "No instructions were provided for new assistant. Using default instructions from AssistantAgent.DEFAULT_SYSTEM_MESSAGE."
102 )
103 instructions = AssistantAgent.DEFAULT_SYSTEM_MESSAGE
--> 104 self._openai_assistant = create_gpt_assistant(
105 self._openai_client,
106 name=name,
107 instructions=instructions,
108 model=model_name,
109 assistant_config=openai_assistant_cfg,
110 )
111 else:
112 logger.warning(
113 "Matching assistant found, using the first matching assistant: %s",
114 candidate_assistants[0].__dict__,
115 )
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/autogen/oai/openai_utils.py:762, in create_gpt_assistant(client, name, instructions, model, assistant_config)
759 assistant_create_kwargs["file_ids"] = assistant_config.get("file_ids", [])
761 logging.info(f"Creating assistant with config: {assistant_create_kwargs}")
--> 762 return client.beta.assistants.create(name=name, instructions=instructions, model=model, **assistant_create_kwargs)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/resources/beta/assistants.py:156, in Assistants.create(self, model, description, instructions, metadata, name, response_format, temperature, tool_resources, tools, top_p, extra_headers, extra_query, extra_body, timeout)
90 """
91 Create an assistant with a model and instructions.
92
(...)
153 timeout: Override the client-level default timeout for this request, in seconds
154 """
155 extra_headers = {"OpenAI-Beta": "assistants=v2", **(extra_headers or {})}
--> 156 return self._post(
157 "/assistants",
158 body=maybe_transform(
159 {
160 "model": model,
161 "description": description,
162 "instructions": instructions,
163 "metadata": metadata,
164 "name": name,
165 "response_format": response_format,
166 "temperature": temperature,
167 "tool_resources": tool_resources,
168 "tools": tools,
169 "top_p": top_p,
170 },
171 assistant_create_params.AssistantCreateParams,
172 ),
173 options=make_request_options(
174 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
175 ),
176 cast_to=Assistant,
177 )
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/_base_client.py:1240, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1226 def post(
1227 self,
1228 path: str,
(...)
1235 stream_cls: type[_StreamT] | None = None,
1236 ) -> ResponseT | _StreamT:
1237 opts = FinalRequestOptions.construct(
1238 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
1239 )
-> 1240 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/_base_client.py:921, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
912 def request(
913 self,
914 cast_to: Type[ResponseT],
(...)
919 stream_cls: type[_StreamT] | None = None,
920 ) -> ResponseT | _StreamT:
--> 921 return self._request(
922 cast_to=cast_to,
923 options=options,
924 stream=stream,
925 stream_cls=stream_cls,
926 remaining_retries=remaining_retries,
927 )
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/_base_client.py:1020, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
1017 err.response.read()
1019 log.debug("Re-raising status error")
-> 1020 raise self._make_status_error_from_response(err.response) from None
1022 return self._process_response(
1023 cast_to=cast_to,
1024 options=options,
(...)
1027 stream_cls=stream_cls,
1028 )
++++++++
To Reproduce
Steps to reproduce the behavior:
excel_oai_agent = GPTAssistantAgent(
    name="excel_oai_agent",
    instructions="You are a code assistant tool that can run Python code",
    llm_config=llm_config,
    assistant_config=assistant_config,
)
user_proxy.initiate_chat(
    recipient=excel_oai_agent,
    message="What are the columns in the excel spreadsheet uploaded?",
)
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots
If applicable, add screenshots to help explain your problem.
Please complete the following information:
Additional context
I installed all packages from requirements.txt and am using the Azure gpt-4-0125 version.
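For what it's worth, "Unknown parameter: 'tool_resources'" is the shape of a v1-era Assistants endpoint rejecting a v2-only field; in the v1 schema, code-interpreter files were attached via a top-level file_ids list instead. A v1-style config would look like this (a sketch, not a confirmed fix; the file id is a placeholder for file.id from the notebook):

```python
file_id = "assistant-file-id"  # placeholder for the uploaded file's id (file.id in the notebook)

# v1-style Assistants config: no "tool_resources"; files go in a top-level "file_ids" list
assistant_config = {
    "tools": [{"type": "code_interpreter"}],
    "file_ids": [file_id],
}
```

Whether this applies depends on which Assistants API version the Azure api-version in use actually serves.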
When running the ADF orchestratorGetandAnalyzeVideos pipeline, each video's childAnalyzeVideo activity fails with:
Operation on target Copy GPT4 Response to Cosmos failed: Failure happened on 'Sink' side. ErrorCode=CosmosDbSqlApiOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=CosmosDbSqlApi operation Failed. ErrorMessage: Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: ; Reason: ();.,Source=Microsoft.DataTransfer.ClientLibrary.CosmosDbSqlApiV3,''Type=Microsoft.Azure.Cosmos.CosmosException,Message=Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: ; Reason: ();,Source=Microsoft.Azure.Cosmos.Client,'
Note: Received the following warnings during deployment:
C:\Projects\AI-in-a-Box\ai-services\gpt-video-analysis-in-a-box\infra\modules\adfpipelines.bicep(12,33) : Warning no-hardcoded-env-urls: Environment URLs should not be hardcoded. Use the environment() function to ensure compatibility across clouds. Found this disallowed host: "vault.azure.net" [https://aka.ms/bicep/linter/no-hardcoded-env-urls]
C:\Projects\AI-in-a-Box\ai-services\gpt-video-analysis-in-a-box\infra\modules\adfpipelines.bicep(74,7) : Warning use-resource-id-functions: If property "aadResourceId" represents a resource ID, it must use a symbolic resource reference, be a parameter or start with one of these functions: extensionResourceId, guid, if, reference, resourceId, subscription, subscriptionResourceId, tenantResourceId. [https://aka.ms/bicep/linter/use-resource-id-functions]
C:\Projects\AI-in-a-Box\ai-services\gpt-video-analysis-in-a-box\infra\modules\adfpipelines.bicep(595,42) : Warning BCP036: The property "Ocp-Apim-Subscription-Key" expected a value of type "string" but the provided value is of type "object".
C:\Projects\AI-in-a-Box\ai-services\gpt-video-analysis-in-a-box\infra\modules\adfpipelines.bicep(649,42) : Warning BCP036: The property "Ocp-Apim-Subscription-Key" expected a value of type "string" but the provided value is of type "object".
C:\Projects\AI-in-a-Box\ai-services\gpt-video-analysis-in-a-box\infra\modules\adfpipelines.bicep(735,48) : Warning BCP036: The property "Ocp-Apim-Subscription-Key" expected a value of type "string" but the provided value is of type "object".
C:\Projects\AI-in-a-Box\ai-services\gpt-video-analysis-in-a-box\infra\modules\adfpipelines.bicep(831,30) : Warning BCP036: The property "api-key" expected a value of type "string" but the provided value is of type "object".
Describe the bug
I am using the Assistants/bot-in-a-box project (https://github.com/Azure/AI-in-a-Box/tree/main/gen-ai/Assistants/bot-in-a-box). This works for me locally using the Bot Framework Emulator, but when I test the bot in "Test in Web Chat" inside the Azure resource, it does not work.
Expected behavior
I expected the bot to connect to the Assistant and answer the questions.
Screenshots
This is the error I got from the App Service log stream, reporting permission issues.
Deployment Parameters:
I am not sure if you need more info.
Additional context
I compared with the other project, SemanticKernelBot (link), and the only difference I found was that in AssistantBot the BlobServiceClient is not used in Startup.cs, whereas it is in SemanticKernelBot.
Hi Team,
In the "Create Azure ML Workspace" step (01-create-workspace.yml), the build keeps failing with the error below:
Process completed with exit code 1.
Where can I make a manual change to the pre-defined name, since "Website with given name func-aibx-mlw already exists."?
B. Rgds
ChanJIn
ERROR:
"status": "Failed",
"error": {
  "code": "DeploymentFailed",
  "target": "/subscriptions/xxxx/resourceGroups/aibx-ml-cj/providers/Microsoft.Resources/deployments/main",
  "message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-deployment-operations for usage details.",
  "details": [{
    "code": "ResourceDeploymentFailure",
    "target": "/subscriptions//resourceGroups/aibx-ml-cj/providers/Microsoft.Resources/deployments/func-aibx-mlw",
    "message": "The resource write operation failed to complete successfully, because it reached terminal provisioning state 'Failed'.",
    "details": [{
      "code": "DeploymentFailed",
      "target": "/subscriptions//resourceGroups/aibx-ml-cj/providers/Microsoft.Resources/deployments/func-aibx-mlw",
      "message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-deployment-operations for usage details.",
      "details": [{
        "code": "Conflict",
        "target": "/subscriptions//resourceGroups/aibx-ml-cj/providers/Microsoft.Web/sites/func-aibx-mlw",
        "message": "Website with given name func-aibx-mlw already exists.",
        "ErrorEntity": {
          "ExtendedCode": "54001",
          "MessageTemplate": "Website with given name {0} already exists.",
          "Parameters": ["func-aibx-mlw"],
          "Code": "Conflict"
        }
      }]
    }]
  }]
}
Describe the bug
This is not entirely a bug; it is more a question about the current implementation.
Suppose I have a file of a GB or more to be split. The current implementation loads it into a temp location. How does it behave when a large file is uploaded, especially one with many tables/images in it? Is this function/code suitable for that scenario?
Thanks
Describe the bug
After the Semantic Kernel Bot in-a-box solution is deployed, or when run locally, it doesn't allow uploading documents to chat against.
Document upload not supported as no Document Intelligence endpoint was provided
Additional context
I am creating a pull request to resolve this issue
Request for a new Box for processing batches of data through one or more AI Services.
Example scenarios:
Proposed solution:
I get the following error when I try to run it on my machine.
File ~/anaconda3/envs/py310/lib/python3.10/site-packages/openai/_base_client.py:898, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
895 if not err.response.is_closed:
896 err.response.read()
--> 898 raise self._make_status_error_from_response(err.response) from None
899 except httpx.TimeoutException as err:
900 if response is not None:
BadRequestError: Error code: 400 - {'error': {'code': 'invalidPayload', 'message': 'Invalid value for purpose.'}}
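The invalidPayload 400 above comes from the file-upload step: the upload's purpose field must be a value the service accepts (the Assistants samples use "assistants"). A hypothetical guard that mirrors the service-side check, with the allow-list limited to this scenario:

```python
def build_upload_payload(filename, purpose="assistants"):
    """Illustrative pre-check: the Assistants samples upload files with purpose='assistants'."""
    if purpose != "assistants":  # hypothetical allow-list for this scenario only
        raise ValueError(f"Invalid value for purpose: {purpose!r}")
    return {"file": filename, "purpose": purpose}
```

If the notebook passes some other purpose value (or an older SDK default), that would explain the error.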
Describe the bug
The DALL-E plugin does not work.
Hi Team,
https://github.com/Azure/AI-in-a-Box/tree/main/machine-learning/ml-ops-in-a-box
step machine-learning/ml-ops-in-a-box/documentation/00-set-up.md
step 2. section 1. Create a new public repo by using this repo as a template.
I don't see the "Use this template" button. Can the team enable it, or is a fork an option?
B. Rgds
Chan JIn
Describe the bug
The Bing Bicep module uses an old, deprecated configuration of Bing; Bing is now under cognitiveservices. Also, the Semantic Kernel libraries are not updated, which means new features like automatic function invocation are not available.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots
If applicable, add screenshots to help explain your problem.
Please complete the following information:
Additional context
Add any other context about the problem here.
Describe the bug
I get the following error when I try to run the bot on the emulator:
Describe the bug
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots
If applicable, add screenshots to help explain your problem.
Please complete the following information:
Additional context
Add any other context about the problem here.
[Assistant Bot]
Is your feature request related to a problem? Please describe.
I'd like to be able to download files generated/updated by an assistant bot.
Describe the solution you'd like
The bot should be able to provide a download link to the files it creates.
Describe alternatives you've considered
Additional context
Related: #78