
bmtools's People

Contributors

achazwl, blitherboom812, chenweize1998, congxin95, ellenzzn, eltociear, hothan01, kunlun-zhu, lphkxd, ningding97, pooruss, qiancheng0, sarah816, shengdinghu, thuqinyj16, zhouxh19, zsharon024


bmtools's Issues

Does integrating llama require a valid openai_key?

Does integrating llama require a valid openai_key? Without a valid key, host_local_tools cannot start. In bmtools/tools/retriever.py, document_embedding = self.embed.embed_documents([document]) calls the embedding method in langchain's embeddings openai.py, and with an invalid openai_key the model cannot be instantiated, so the embedding fails. How should this be configured, and what needs to be changed? Thanks!
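
A possible workaround, offered as a sketch rather than an official BMTools configuration: if only the retriever's embeddings need the key, swapping OpenAIEmbeddings for a local model in bmtools/tools/retriever.py removes the OpenAI dependency for that step (HuggingFaceEmbeddings and the model name below are assumptions, not part of BMTools):

    # Hedged sketch: replace the OpenAI embeddings used by retriever.py with a local
    # sentence-transformers model so no openai_key is needed for embedding.
    from langchain.embeddings import HuggingFaceEmbeddings  # requires sentence-transformers

    embed = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
    document_embedding = embed.embed_documents(["an example document"])
    print(len(document_embedding[0]))  # 384-dimensional vectors for this model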

BabyagiTools.py error (TypeError: metaclass conflict)

When I run this file, I get an error. How can I solve this problem? Please help.

class BabyAGI(Chain, BaseModel):
TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
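
For context, here is a minimal, self-contained sketch (hypothetical classes, unrelated to langchain) of how this TypeError arises: Python cannot derive a class from two bases whose metaclasses are unrelated. A commonly reported workaround for BabyagiTools.py, assuming a recent langchain in which Chain is itself a pydantic BaseModel, is to drop the redundant BaseModel base, i.e. class BabyAGI(Chain):.

    # Minimal reproduction of the error class itself (hypothetical metaclasses):
    class MetaA(type):
        pass

    class MetaB(type):
        pass

    class A(metaclass=MetaA):
        pass

    class B(metaclass=MetaB):
        pass

    try:
        class C(A, B):  # TypeError: metaclass conflict, as in the report above
            pass
    except TypeError as e:
        print(e)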

Issue with KeyError: 'type'

When using the map tool, a KeyError: 'type' is raised. This happens because the parameters of the methods in bmtools/tools/map/api.py have no type annotations, for example:

    @tool.get("/get_coordinates")
    def get_coordinates(location):
        """Get the coordinates of a location"""
        url = BASE_URL + "Locations"
        params = {
            "query": location,
            "key": KEY
        }
        response = requests.get(url, params=params)
        json_data = response.json()
        coordinates = json_data["resourceSets"][0]["resources"][0]["point"]["coordinates"]
        return coordinates,

A str type annotation needs to be added to location; the corrected code is:

    @tool.get("/get_coordinates")
    def get_coordinates(location: str):
        """Get the coordinates of a location"""
        url = BASE_URL + "Locations"
        params = {
            "query": location,
            "key": KEY
        }
        response = requests.get(url, params=params)
        json_data = response.json()
        coordinates = json_data["resourceSets"][0]["resources"][0]["point"]["coordinates"]
        return coordinates
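
To illustrate why the annotation matters (a standalone FastAPI sketch, not BMTools itself, and the explanation is an inference from the report rather than a confirmed diagnosis): FastAPI fills the OpenAPI "type" field of a query parameter from the Python type hint, and BMTools parses each tool's generated openapi.json, so an unannotated parameter leaves that field missing.

    # Hypothetical standalone app: with "location: str" the generated parameter schema
    # contains {"type": "string"}; with a bare "location" the schema is empty, which is
    # consistent with the KeyError: 'type' seen when the spec is parsed.
    import json
    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/get_coordinates")
    def get_coordinates(location: str):
        return {"location": location}

    params = app.openapi()["paths"]["/get_coordinates"]["get"]["parameters"]
    print(json.dumps(params, indent=2))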

Possibility of making the plugins compatible with other models

Thanks for open-sourcing this project. From the source code it looks tightly coupled to the OpenAI key. Have you considered a more standalone version, for example one compatible with large models such as ChatGLM or MOSS?

[BUG] ImportError after setup.py and host_local_tools.py

Python version: 3.9.7
Environment: Windows, Git Bash
anyio version: $ pip install anyio --upgrade
Requirement already satisfied: anyio in e:\program files (x86)\anaconda\lib\site-packages (2.2.0)
Requirement already satisfied: sniffio>=1.1 in e:\program files (x86)\anaconda\lib\site-packages (from anyio) (1.2.0)
Requirement already satisfied: idna>=2.8 in e:\program files (x86)\anaconda\lib\site-packages (from anyio) (3.2)

$ python host_local_tools.py
Traceback (most recent call last):
File "C:\Users\51645\Documents\GitHub\BMTools\host_local_tools.py", line 1, in <module>
import bmtools
File "C:\Users\51645\Documents\GitHub\BMTools\bmtools\__init__.py", line 1, in <module>
from .tools.serve import ToolServer
File "C:\Users\51645\Documents\GitHub\BMTools\bmtools\tools\__init__.py", line 1, in <module>
from . import chemical
File "C:\Users\51645\Documents\GitHub\BMTools\bmtools\tools\chemical\__init__.py", line 1, in <module>
from ..registry import register
File "C:\Users\51645\Documents\GitHub\BMTools\bmtools\tools\registry.py", line 1, in <module>
from .tool import Tool
File "C:\Users\51645\Documents\GitHub\BMTools\bmtools\tools\tool.py", line 1, in <module>
import fastapi
File "E:\Program Files (x86)\Anaconda\lib\site-packages\fastapi-0.95.0-py3.9.egg\fastapi\__init__.py", line 7, in <module>
from .applications import FastAPI as FastAPI
File "E:\Program Files (x86)\Anaconda\lib\site-packages\fastapi-0.95.0-py3.9.egg\fastapi\applications.py", line 16, in <module>
from fastapi import routing
File "E:\Program Files (x86)\Anaconda\lib\site-packages\fastapi-0.95.0-py3.9.egg\fastapi\routing.py", line 25, in <module>
from fastapi.dependencies.utils import (
File "E:\Program Files (x86)\Anaconda\lib\site-packages\fastapi-0.95.0-py3.9.egg\fastapi\dependencies\utils.py", line 23, in <module>
from fastapi.concurrency import (
File "E:\Program Files (x86)\Anaconda\lib\site-packages\fastapi-0.95.0-py3.9.egg\fastapi\concurrency.py", line 6, in <module>
from anyio import CapacityLimiter
ImportError: cannot import name 'CapacityLimiter' from 'anyio' (E:\Program Files (x86)\Anaconda\lib\site-packages\anyio\__init__.py)
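
A likely cause, inferred from the versions in the log rather than confirmed: fastapi 0.95 imports CapacityLimiter from the top-level anyio namespace, which only exists in anyio 3.x, while this environment has anyio 2.2.0. A quick check, plus the usual remedy of upgrading anyio (e.g. pip install "anyio>=3.6"):

    # Hedged diagnostic: this import succeeds on anyio >= 3 and fails on 2.2.0
    # with exactly the ImportError shown above.
    import anyio

    print(anyio.__version__)
    from anyio import CapacityLimiter  # ImportError on anyio 2.x
    print("CapacityLimiter import OK")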

web_demo conversation error

Traceback (most recent call last):
File "D:\miniconda3\envs\bmtool\lib\site-packages\gradio\routes.py", line 414, in run_predict
output = await app.get_blocks().process_api(
File "D:\miniconda3\envs\bmtool\lib\site-packages\gradio\blocks.py", line 1320, in process_api
result = await self.call_function(
File "D:\miniconda3\envs\bmtool\lib\site-packages\gradio\blocks.py", line 1064, in call_function
prediction = await utils.async_iteration(iterator)
File "D:\miniconda3\envs\bmtool\lib\site-packages\gradio\utils.py", line 514, in async_iteration
return await iterator.__anext__()
File "D:\miniconda3\envs\bmtool\lib\site-packages\gradio\utils.py", line 507, in __anext__
return await anyio.to_thread.run_sync(
File "D:\miniconda3\envs\bmtool\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\miniconda3\envs\bmtool\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "D:\miniconda3\envs\bmtool\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\miniconda3\envs\bmtool\lib\site-packages\gradio\utils.py", line 490, in run_sync_iterator_async
return next(iterator)
File "E:\workspace_shgl\BMTools-main\web_demo.py", line 63, in answer_by_tools
for inter in agent_executor(question):
File "E:\workspace_shgl\BMTools-main\bmtools\agent\executor.py", line 99, in call
self.callback_manager.on_chain_start(
AttributeError: 'Executor' object has no attribute 'callback_manager'

Question about the multi_tools data

I'd like to ask: in the released SFT dataset, the prompt part of the multi_tools data does not specify the available tools, which differs from the single_tool data.
How exactly is the data for the SFT stage organized?

Operating Environment

Hello, I am a beginner who has just started learning about Tool Learning. I have used CentOS 7 and CentOS Stream 9 to install and run the environment, but I always encounter various errors when installing the related dependencies, which makes it difficult for me to complete the deployment. Therefore, I would like to ask which operating system (and version) you use.

Is the evaluation dataset available?

Hi, the original paper (https://arxiv.org/pdf/2304.08354.pdf) mentions: "Specifically, if humans judge that all the API calls are accurate for the given task, and they yield a reasonable result, the task is deemed to be correctly completed. The codes and our curated dataset will be made available to the academic community."

I'm trying to reproduce some of the results and am curious whether this evaluation dataset is available somewhere.

python host_local_tools.py raises an error

When I execute python host_local_tools.py, it raises the error below:
File "C:\Users\Kurtqian\anaconda3\lib\site-packages\pydantic-2.0a3-py3.8.egg\pydantic\_internal\_generate_schema.py", line 399, in _generate_schema
raise PydanticSchemaGenerationError(
pydantic.errors.PydanticSchemaGenerationError: Unable to generate pydantic-core schema for <class 'fastapi.openapi.models.EmailStr'>. Setting arbitrary_types_allowed=True in the model_config may prevent this error.

For further information visit <TODO: Set up the errors URLs>/schema-for-unknown-type

I checked BabyagiTools.py, and the class Config parameter arbitrary_types_allowed = True is already set, so why does this error still appear?
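
One likely explanation, offered as an inference from the versions in the traceback rather than a confirmed diagnosis: the failure occurs while pydantic 2.0a3 processes fastapi's own OpenAPI models (fastapi.openapi.models.EmailStr), so the arbitrary_types_allowed setting in BabyagiTools.py never comes into play. fastapi 0.95.x supports only pydantic v1, and pinning pydantic below 2 (e.g. pip install "pydantic<2") is the commonly reported fix.

    # Hedged compatibility check, not a guaranteed fix: if pydantic reports a 2.x
    # pre-release alongside fastapi 0.95.x, the schema-generation error above is expected.
    import fastapi
    import pydantic

    print("fastapi", fastapi.__version__)
    print("pydantic", pydantic.VERSION)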

Langchain cannot parse final answer

Overview

I've been trying to run test.py and multi_test.py and they aren't working properly. Everything works fine up to the final answer, where langchain raises:
OutputParserException: Could not parse LLM output: 'I now know the final answer.'

To replicate:

  1. Run host_local_tools.py with weather tool enabled
  2. Run test.py

Full output:

[INFO|(BMTools)singletool:79]2023-07-11 17:33:42,949 >>  Using ChatGPT
[INFO|(BMTools)singletool:25]2023-07-11 17:33:42,953 >>  Doc string URL: http://127.0.0.1:8079/tools/weather/openapi.json
[INFO|(BMTools)singletool:34]2023-07-11 17:33:42,954 >>  server_url http://127.0.0.1:8079/tools/weather
[INFO|(BMTools)apitool:146]2023-07-11 17:33:42,955 >>  API Name: get_weather_today
[INFO|(BMTools)apitool:147]2023-07-11 17:33:42,956 >>  API Description: Get today's the weather. Your input should be a json (args json schema): {{"location" : string, }} The Action to trigger this API should be get_weather_today and the input parameters should be a json dict string. Pay attention to the type of parameters.
[INFO|(BMTools)apitool:146]2023-07-11 17:33:42,957 >>  API Name: forecast_weather
[INFO|(BMTools)apitool:147]2023-07-11 17:33:42,958 >>  API Description: Forecast weather in the upcoming days.. Your input should be a json (args json schema): {{"location" : string, "days" : integer, }} The Action to trigger this API should be forecast_weather and the input parameters should be a json dict string. Pay attention to the type of parameters.
[INFO|(BMTools)singletool:93]2023-07-11 17:33:42,959 >>  Tool [weather] has the following apis: [RequestTool(name='get_weather_today', description='Get today\'s the weather. Your input should be a json (args json schema): {{"location" : string, }} The Action to trigger this API should be get_weather_today and the input parameters should be a json dict string. Pay attention to the type of parameters.', args_schema=None, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x000001D15E393730>, func=<function RequestTool.__init__.<locals>.func at 0x000001D14CE678B0>, afunc=<function RequestTool.__init__.<locals>.afunc at 0x000001D1211E19D0>, coroutine=None, tool_logo_md='', max_output_len=4000), RequestTool(name='forecast_weather', description='Forecast weather in the upcoming days.. Your input should be a json (args json schema): {{"location" : string, "days" : integer, }} The Action to trigger this API should be forecast_weather and the input parameters should be a json dict string. Pay attention to the type of parameters.', args_schema=None, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x000001D15E393730>, func=<function RequestTool.__init__.<locals>.func at 0x000001D14CF0E550>, afunc=<function RequestTool.__init__.<locals>.afunc at 0x000001D1211E1790>, coroutine=None, tool_logo_md='', max_output_len=4000)]
[INFO|(BMTools)singletool:113]2023-07-11 17:33:42,960 >>  Full prompt template: Answer the following questions as best you can. General instructions are: Plugin for look up weather information. Specifically, you have access to the following APIs:

get_weather_today: Get today's the weather. Your input should be a json (args json schema): {{"location" : string, }} The Action to trigger this API should be get_weather_today and the input parameters should be a json dict string. Pay attention to the type of parameters.
forecast_weather: Forecast weather in the upcoming days.. Your input should be a json (args json schema): {{"location" : string, "days" : integer, }} The Action to trigger this API should be forecast_weather and the input parameters should be a json dict string. Pay attention to the type of parameters.

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [get_weather_today, forecast_weather]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin! Remember: (1) Follow the format, i.e,
Thought:
Action:
Action Input:
Observation:
Final Answer:
. The action you generate must be exact one of the given API names instead of a sentence or any other redundant text. The action input is one json format dict without any redundant text or bracket descriptions . (2) Provide as much as useful information (such as useful values/file paths in your observation) in your Final Answer. Do not describe the process you achieve the goal, but only provide the detailed answer or response to the task goal. (3) Do not make up anything. DO NOT generate observation content by yourself. (4) Read the observation carefully, and pay attention to the messages even if an error occurs. (5) Once you have enough information, please immediately use 
Thought: I have got enough information
Final Answer: 

Task: {input}
{agent_scratchpad}
weather {'schema_version': 'v1', 'name_for_human': 'Weather Info', 'name_for_model': 'Weather', 'description_for_human': 'Look up weather information', 'description_for_model': 'Plugin for look up weather information', 'auth': {'type': 'none'}, 'api': {'type': 'openapi', 'url': 'http://127.0.0.1:8079/tools/weather/openapi.json', 'is_user_authenticated': False}, 'author_github': None, 'logo_url': 'https://cdn.weatherapi.com/v4/images/weatherapi_logo.png', 'contact_email': '[email protected]', 'legal_info_url': '[email protected]'}


> Entering new AgentExecutorWithTranslation chain...
Thought: I need to get the weather information for San Francisco today.
Action: get_weather_today
Action Input: {"location": "San Francisco"}
Observation: "Today's weather report for San Francisco is:\noverall: Partly cloudy,\nname: San Francisco,\nregion: California,\ncountry: United States of America,\nlocaltime: 2023-07-11 2:33,\ntemperature: 11.1(C), 52.0(F),\npercipitation: 0.0(mm), 0.0(inch),\npressure: 1015.0(milibar),\nhumidity: 89,\ncloud: 25,\nbody temperature: 10.0(C), 50.0(F),\nwind speed: 15.5(kph), 9.6(mph),\nvisibility: 16.0(km), 9.0(miles),\nUV index: 1.0,\n"
Thought:
---------------------------------------------------------------------------
OutputParserException                     Traceback (most recent call last)
Cell In[9], line 8
      5 stqa =  STQuestionAnswerer()
      7 agent = stqa.load_tools(tool_name, tool_config, prompt_type="react-with-tool-description")
----> 8 agent("write a weather report for SF today")
     10 """ 
     11 from bmtools.agent.singletool import load_single_tools, STQuestionAnswerer
     12 
   (...)
     20 agent("Where is Yaoming Born?")
     21 """

File ~\anaconda3\envs\ToolBench\lib\site-packages\langchain\chains\base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116     raise e
    117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
    118 return self.prep_outputs(inputs, outputs, return_only_outputs)

File ~\anaconda3\envs\ToolBench\lib\site-packages\langchain\chains\base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
    107 self.callback_manager.on_chain_start(
    108     {"name": self.__class__.__name__},
    109     inputs,
    110     verbose=self.verbose,
    111 )
    112 try:
--> 113     outputs = self._call(inputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)

File ~\anaconda3\envs\ToolBench\lib\site-packages\langchain\agents\agent.py:792, in AgentExecutor._call(self, inputs)
    790 # We now enter the agent loop (until it returns something).
    791 while self._should_continue(iterations, time_elapsed):
--> 792     next_step_output = self._take_next_step(
    793         name_to_tool_map, color_mapping, inputs, intermediate_steps
    794     )
    795     if isinstance(next_step_output, AgentFinish):
    796         return self._return(next_step_output, intermediate_steps)

File ~\anaconda3\envs\ToolBench\lib\site-packages\langchain\agents\agent.py:672, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps)
    667 """Take a single step in the thought-action-observation loop.
    668 
    669 Override this to take control of how the agent makes and acts on choices.
    670 """
    671 # Call the LLM to see what to do.
--> 672 output = self.agent.plan(intermediate_steps, **inputs)
    673 # If the tool chosen is the finishing tool, then we end and return.
    674 if isinstance(output, AgentFinish):

File ~\anaconda3\envs\ToolBench\lib\site-packages\langchain\agents\agent.py:385, in Agent.plan(self, intermediate_steps, **kwargs)
    383 full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)
    384 full_output = self.llm_chain.predict(**full_inputs)
--> 385 return self.output_parser.parse(full_output)

File ~\anaconda3\envs\ToolBench\lib\site-packages\langchain\agents\mrkl\output_parser.py:24, in MRKLOutputParser.parse(self, text)
     22 match = re.search(regex, text, re.DOTALL)
     23 if not match:
---> 24     raise OutputParserException(f"Could not parse LLM output: `{text}`")
     25 action = match.group(1).strip()
     26 action_input = match.group(2)

OutputParserException: Could not parse LLM output: `I now know the final answer.`

I looked in the langchain code, and from the regex pattern in langchain/agents/mrkl/output_parser.py it seems langchain was expecting an action/tool call but could not find one, because the LLM returned "I now know the final answer" as its output. I don't think modifying the langchain prompt is the solution, because I know langchain can work with the default prompt, as evidenced by the SFT data generation I was able to accomplish with the code you provided for ToolBench.
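
A small sketch of the check that fails (the regex below approximates the pattern used by langchain's MRKL parser in this era and is not copied from a specific release): the parser accepts either a "Final Answer:" line or an Action/Action Input pair, and the bare "I now know the final answer." provides neither, so it raises.

    # Hedged illustration of the parser's two acceptance paths.
    import re

    text = "I now know the final answer."
    regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"

    print("Final Answer:" in text)                    # False -> not treated as a finish
    print(re.search(regex, text, re.DOTALL) is None)  # True  -> OutputParserException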

Additionally, I cannot get web_demo.py to run properly, and am getting the same error as #51

Fresh install reports this error when running

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [search_top3, load_page_index]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin! Remember: (1) Follow the format, i.e,
Thought:
Action:
Action Input:
Observation:
Final Answer:
(2) Provide as much as useful information in your Final Answer. (3) YOU MUST INCLUDE all relevant IMAGES in your Final Answer using format img, and include relevant links. (3) Do not make up anything, and if your Observation has no link, DO NOT hallucihate one. (4) If you have enough information, please use
Thought: I have got enough information
Final Answer:

Question: {input}
{agent_scratchpad}
Traceback (most recent call last):
File "/www/apps/tools/venv/lib/python3.10/site-packages/gradio/routes.py", line 399, in run_predict
output = await app.get_blocks().process_api(
File "/www/apps/tools/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1299, in process_api
result = await self.call_function(
File "/www/apps/tools/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1036, in call_function
prediction = await anyio.to_thread.run_sync(
File "/www/apps/tools/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/www/apps/tools/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/www/apps/tools/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/www/apps/tools/venv/lib/python3.10/site-packages/gradio/utils.py", line 488, in async_iteration
return next(iterator)
File "/www/apps/tools/web_demo.py", line 68, in answer_by_tools
for inter in agent_executor(question):
File "/www/apps/tools/bmtools/agent/executor.py", line 99, in call
self.callback_manager.on_chain_start(
AttributeError: 'NoneType' object has no attribute 'on_chain_start'

HTTP error

Thank you for the excellent work. I am trying test.py.

from bmtools.agent.singletool import load_single_tools, STQuestionAnswerer

# Langchain
tool_name, tool_url = 'weather',  "http://127.0.0.1:8079/tools/weather/"
tool_name, tool_config = load_single_tools(tool_name, tool_url)
print(tool_name, tool_config)
stqa =  STQuestionAnswerer()

agent = stqa.load_tools(tool_name, tool_config, prompt_type="react-with-tool-description")
agent("write a weather report for SF today")

However, the connection error bothers me a lot.

requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=8079): Max retries exceeded with url: /tools/weather/.well-known/ai-plugin.json (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f96cfbcd1b0>: Failed to establish a new connection: [Errno 111] Connection refused'))

I have checked the previous issues and still have no idea how to solve it. Could you give some suggestions?
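
A hedged pre-flight check, not part of BMTools itself: test.py assumes a tool server is already listening on 127.0.0.1:8079 (started, for example, with python host_local_tools.py and the weather tool enabled), and a connection refused on ai-plugin.json usually just means that server is not running yet.

    # If this request fails with a connection error, start the tool server first.
    import requests

    url = "http://127.0.0.1:8079/tools/weather/.well-known/ai-plugin.json"
    resp = requests.get(url, timeout=5)
    print(resp.status_code)  # 200 once host_local_tools.py is serving the weather tool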

How can llama be integrated?

From a quick look at the code, the agent goes through OpenAI. How can llama be integrated? The llama2 source code is almost identical to llama, so would the integration effort be about the same? Many thanks!
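
One commonly used approach, offered as an assumption rather than an official BMTools path: serve llama behind an OpenAI-compatible endpoint (for example FastChat's OpenAI API server) and point langchain's ChatOpenAI at it, so agent code built on the OpenAI interface stays unchanged. The model name and local URL below are placeholders.

    # Hedged sketch: route the existing OpenAI-style calls to a local llama server.
    from langchain.chat_models import ChatOpenAI
    from langchain.schema import HumanMessage

    llm = ChatOpenAI(
        model_name="llama-2-7b-chat",                # whatever name the local server registers
        openai_api_base="http://127.0.0.1:8000/v1",  # local OpenAI-compatible endpoint
        openai_api_key="EMPTY",                      # most local servers ignore the key
        temperature=0.0,
    )
    print(llm([HumanMessage(content="Say hello")]).content)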

Have an issue with langchain

File "/code/web_demo.py", line 68, in answer_by_tools for inter in agent_executor(question): File "/code/bmtools/agent/executor.py", line 99, in __call__ self.callback_manager.on_chain_start( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'on_chain_start'

I pip-installed from the requirements file, so I should have the right langchain version, but I can't find that on_chain_start attribute anywhere.
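
A small diagnostic sketch, not a fix: bmtools/agent/executor.py (line 99 above) expects langchain's Chain to still carry a callback_manager field, which newer langchain releases replaced with the callbacks API; if the check below prints False, the installed langchain is probably newer than the version pinned in BMTools' requirements.txt.

    # Hedged check of whether the field executor.py relies on still exists.
    import langchain
    from langchain.chains.base import Chain

    print(langchain.__version__)
    print("callback_manager" in getattr(Chain, "__fields__", {}))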

Datasets

Hi!

Thanks for creating and maintaining this effort!

In the paper (https://arxiv.org/pdf/2304.08354.pdf) it says "The codes and datasets are publicly available for further research exploration", but I didn't find any mention of the data in the repo.
Has the dataset been released?

Thanks a lot!

Repeated calls, hallucinating non-existent plugins, and a high failure rate

I'm using a GPT-3.5 key, every test uses a single plugin, and the plugin itself is confirmed to work.

1. Whenever a command is executed, BMTools always runs it two or more times. For example, with the music search plugin, entering 周杰伦 (Jay Chou) makes BMTools call the API twice.
(screenshots)

2. After a successful result, for example once the answer has already been found as in the case above, BMTools starts imagining plugins that do not exist, which wastes a lot of time.
(screenshot)

3. In actual use, only 3 out of 10 calls succeed, and each takes more than 1.5 minutes.

4. The final answer is sometimes influenced by the plugin description; the final output below is clearly affected by it (or perhaps the plugin does not support Chinese?).
(screenshot)

I really like OpenBMB's tools and am very grateful for your work; I hope the plugins keep improving. These are the problems I found after three days of in-depth use, and I hope the maintainers can optimize and fix them. Thanks again for your efforts.

Adding a local package as a plugin

To add a local algorithm package as a plugin, is it enough to write a plugin that can run Python code?

And how can the package's usage be brought into the LLM's knowledge?

Many thanks!
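
A rough sketch of the pattern the bundled tools appear to follow (the Tool constructor arguments and my_algorithm_pkg below are assumptions; copy an existing tool such as bmtools/tools/map/api.py for the exact signature, and register it in an __init__.py the way the bundled tools do via the register decorator imported in the traceback above). For the second question, the endpoint docstrings and tool descriptions are what get folded into the LLM prompt, as the weather prompt logged in the issue above shows, so the package's usage should be described there.

    # Hedged sketch of wrapping a local package as a BMTools tool.
    from bmtools.tools.tool import Tool

    import my_algorithm_pkg  # hypothetical local package

    tool = Tool("My Algorithm", "Run my local algorithm package")  # signature simplified

    @tool.get("/run_algorithm")
    def run_algorithm(data: str):
        """Run the algorithm on a comma-separated list of numbers and return the result."""
        numbers = [float(x) for x in data.split(",")]
        return my_algorithm_pkg.run(numbers)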
