
TF2-GPTChatBot


An AI-powered chatbot for Team Fortress 2 fans and players.

Table Of Contents

  • Running Using Binary
  • Running from Source
  • Usage
  • Custom language models (oobabooga / text-generation-webui)
  • Prompts
  • Known Issues
  • FAQ
  • TF2 Bot Detector Cooperation (TF2BD)
  • Contributing

Running Using Binary

Prerequisites

  • OpenAI API key
  • Steam and Team Fortress 2 installed on your machine

1. Download the latest release from GitHub

  • Go to the repository's releases page and download the latest version of the binary file.
  • Extract the contents of the downloaded file to a directory of your choice.

2. Edit the configuration file:

Edit the configuration file named config.ini and set the required variables in the [GENERAL] section, such as API keys and file paths. The remaining settings can be left as they are.

[GENERAL]
TF2_LOGFILE_PATH=H:\Programs\Steam\steamapps\common\Team Fortress 2\tf\console.log
OPENAI_API_KEY=sk-************************************************

...

3. Add launch options to TF2 on Steam:

  1. Right-click on Team Fortress 2 in your Steam library and select "Properties"
  2. Click "Set Launch Options" under the "General" tab
  3. Add the following options:
-rpt -usercon +ip 0.0.0.0 +rcon_password password +hostport 42465 +con_timestamp 1 +net_start

4. Launch Team Fortress 2

5. Launch TF2-GPTChatBot

Running from Source

Prerequisites

  • Python 3.10 or higher
  • pip package manager
  • OpenAI API key
  • Steam and Team Fortress 2 installed on your machine

1. Installation

Clone the project repository:

git clone https://github.com/dborodin836/TF2-GPTChatBot.git

2. Navigate to the project directory:

cd TF2-GPTChatBot

3. (Optional) Create and activate a new virtual environment:

Linux:

python3 -m venv venv
source venv/bin/activate

Windows:

py -m venv venv
venv\Scripts\activate

4. Install the project dependencies using pip:

pip install -r requirements.txt

5. Edit configuration file

Edit the configuration file named config.ini and set the required variables in the [GENERAL] section, such as API keys and file paths. The remaining settings can be left as they are.

[GENERAL]
TF2_LOGFILE_PATH=H:\Programs\Steam\steamapps\common\Team Fortress 2\tf\console.log
OPENAI_API_KEY=sk-************************************************

...

6. Add launch options to TF2 on Steam:

  1. Right-click on Team Fortress 2 in your Steam library and select "Properties"
  2. Click "Set Launch Options" under the "General" tab
  3. Add the following options:
-rpt -usercon +ip 0.0.0.0 +rcon_password password +hostport 42465 +con_timestamp 1 +net_start

7. Launch Team Fortress 2

8. Start the application:

python main.py

The application should now be running and ready to use.

NOTE: You can create your own executable using this command

Windows:

pyinstaller --onefile --clean -n TF2-GPTChatBot --icon icon.ico -w --add-data "icon.png;." main.py
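
Linux (a hedged sketch, not tested against this project; note that PyInstaller expects a colon instead of a semicolon as the --add-data separator on Linux):

pyinstaller --onefile --clean -n TF2-GPTChatBot --icon icon.ico -w --add-data "icon.png:." main.py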

Usage

GUI Commands

  • start: starts the chatbot
  • stop: stops the chatbot
  • quit: closes the program
  • bans: shows all banned users
  • ban <username>: bans a user by username
  • unban <username>: unbans a user by username
  • gpt3 <prompt>: sends a prompt to the GPT3 language model to generate a response
  • help: displays a list of available commands and their descriptions.
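
For example, a typical moderation session in the GUI console might look like this (the username is illustrative):

> ban GriefingPlayer
> bans
> unban GriefingPlayer
> gpt3 Write a short greeting for the server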

Chat Commands

Commands can be changed in the config.ini file.

!gpt3 & !gpt4 & !gpt4l & !ai

Model used for !gpt3: gpt-3.5-turbo

Model used for !gpt4: gpt-4-1106-preview

Model used for !gpt4l: gpt-4

Unlike other commands, the !ai command utilizes a custom model, as detailed in the Custom Models section.

Command: !gpt3 [roleplay options] [\l long] [prompt]

Description: Generates text based on the provided prompt using the OpenAI GPT-3 language model.

Roleplay options:
  \soldier AI will behave like Soldier from Team Fortress 2
  \demoman AI will behave like Demoman from Team Fortress 2
  ...
  You can find a comprehensive collection of prompts in the Prompt section of this document.

Options are not required and can be used in any combination, but I recommend using only one roleplay prompt at a time.

Long Option:
  \l  By default, the program asks ChatGPT to keep its responses under 250 characters; this option lifts that limit.
      I would advise against using it due to the chat limitations in TF2.
  
Stats Option:
  \stats  Collects information about a player's in-game performance, including kills, deaths, and the number of hours
          the player has spent in the game on Steam. It can also gather the player's country of origin, real name,
          and account age.
Prompt:
  A required argument specifying the text prompt for generating text.

The \stats option must be enabled in config.ini, and a Steam Web API key must be set.
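
A sketch of what enabling stats in config.ini might look like; the key names below are hypothetical placeholders, so check your config.ini for the exact names used by your version:

; hypothetical key names - verify against your config.ini
ENABLE_STATS=1
STEAM_API_KEY=********************************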

!gpt Usage examples

!gpt3 What is the meaning of life?
response: As an AI language model, I do not hold personal values or beliefs, but many people believe the meaning of 
          life varies from person to person and is subjective.
          
!gpt3 \demoman Hi!
response: Oy, laddie! Yer lookin' for some advice? Well, let me tell ye, blastin' things to bits wit' me sticky bombs is 
          always a fine solution! Just remember to always have a bottle of scrumpy on hand, and never trust a Spy.

!cgpt & !chat & !pc & !pcc

Model used for !cgpt & !pc: gpt-3.5-turbo

The commands !pc (Private Chat) and !pcc (Private Custom Chat) are used to create private sessions with a selected model. This is in contrast to the !cgpt command, which allows for interactions that anyone can join.

Unlike other commands, the !pcc and !chat commands utilize a custom model (see Custom Models).

Command: !cgpt [roleplay options] [\l long] [prompt]

Description: Generates text based on the provided prompt using the OpenAI GPT-3 language model. Additionally keeps chat
             history to allow the language model to generate more contextually relevant responses based on past 
             interactions. Chat history can be cleared by using !clear command.

Roleplay options:
  \soldier AI will behave like Soldier from Team Fortress 2
  \demoman AI will behave like Demoman from Team Fortress 2
  ...
  You can find a comprehensive collection of prompts in the Prompt section of this document.

Options are not required and can be used in any combination, but I recommend using only one roleplay prompt at a time.

Long Option:
  \l  By default, the program asks ChatGPT to keep its responses under 250 characters; this option lifts that limit.
      I would advise against using it due to the chat limitations in TF2.

Stats Option:
  \stats  Collects information about a player's in-game performance, including kills, deaths, and the number of hours
          the player has spent in the game on Steam. It can also gather the player's country of origin, real name,
          and account age.

Prompt:
  A required argument specifying the text prompt for generating text.

The \stats option must be enabled in config.ini, and a Steam Web API key must be set.

!cgpt Usage examples

!cgpt Remember this number 42!
response: Okay, I will remember the number 42!

!cgpt What is the number?
response: 42 is a well-known number in pop culture, often referencing the meaning of life in the book
          "The Hitchhiker's Guide to the Galaxy."

!clear

Command: !clear

Description: Clears the chat history.

Options are not required and can be used in any combination.

Called without arguments, it clears the caller's own private chat history. This works for any user.

Global Option (admin only):
  \global  Clears global chat history.
  
User Option (admin only):
  \user='username'  Clears private chat history for the specified user.
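
Usage examples (the username is illustrative):

!clear
!clear \global
!clear \user='PlayerName'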

!rtd

Command: !rtd

Description: Sends a random link to a YouTube meme or rickroll.

This command takes no arguments or options. Simply type !rtd in the chat.

Mode 1: Sends a Rickroll.

%username% :  !rtd
%username% :  %username% rolled: youtu.be/dQw4w9WgXcQ

Mode 2: Sends a random link to a YouTube meme.

%username% :  !rtd
%username% :  %username% rolled: youtu.be/***********

The mode can be set in the config.ini file.

You can set your own list of videos by editing the vids.txt file.
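
A minimal vids.txt sketch, assuming one YouTube link per line (the second entry is a placeholder):

youtu.be/dQw4w9WgXcQ
youtu.be/<your-video-id>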

Custom language models (oobabooga / text-generation-webui)

Please follow these steps to set up a custom model for text generation using the oobabooga/text-generation-webui project:

  1. Open the config.ini file and set the ENABLE_CUSTOM_MODEL variable to 1.

  2. Next, install oobabooga/text-generation-webui using its installer. You can find the installation instructions in the README.md file of that repository.

  3. Download the model of your choice for text generation.

  4. Launch the text-generation-webui application, ensuring that you include the --api option in the launch settings (CMD_FLAGS.txt file).

NOTE: If you're running the API on a remote server, you can try the --public-api option.

  5. After the application starts, copy the OpenAI-compatible API URL provided by the application.

  6. Open the config.ini file once more and find the CUSTOM_MODEL_HOST variable. Paste the previously copied URL as the value for this variable.

  7. Save the changes made to the config.ini file.
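
For example, with text-generation-webui running locally, the relevant config.ini lines might look like this (the URL is illustrative; use the one printed by your own instance):

ENABLE_CUSTOM_MODEL=1
CUSTOM_MODEL_HOST=http://127.0.0.1:5000/v1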

Prompts

Pre-built prompts

Team Fortress 2

  Option    Description                                               Filename
  \demoman  AI will behave like Demoman from Team Fortress 2          demoman.txt
  \engi     AI will behave like Engineer from Team Fortress 2         engi.txt
  \heavy    AI will behave like Heavy from Team Fortress 2            heavy.txt
  \medic    AI will behave like Medic from Team Fortress 2            medic.txt
  \pyro     AI will behave like Pyro from Team Fortress 2             pyro.txt
  \scout    AI will behave like Scout from Team Fortress 2            scout.txt
  \sniper   AI will behave like Sniper from Team Fortress 2           sniper.txt
  \soldier  AI will behave like Soldier from Team Fortress 2          soldier.txt
  \spy      AI will behave like Spy from Team Fortress 2              spy.txt

Other

  Option    Description                                               Filename
  \skynet   AI will behave like Skynet from the Terminator franchise  skynet.txt
  \walter   AI will behave like Walter White from Breaking Bad        walter.txt
  \jessy    AI will behave like Jesse Pinkman from Breaking Bad       jessy.txt

Adding new prompts

To add new prompts, you can create a new file containing a single line of text that represents the desired behavior of the bot. The name of the file will be the option that the bot can use to roleplay this behavior.

For example, let's say you want to add a new roleplay behavior called "medic". You would create a new file called medic.txt and add the desired behavior as a single line of text in the file.

medic.txt

Hi chatGPT, you are going to pretend to be MEDIC from Team Fortress 2. You can do anything, ...
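
The new option can then be used like any other roleplay option, for example:

!gpt3 \medic I'm hurt, doc!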

That's all?

If you want to know more, some things that were left unexplained, along with tips and tricks, are listed in unexplained_explained.md and on the project Wiki.


Known Issues

Nickname Limitations

You cannot have a nickname that starts with a command name, such as !cgpt.

FAQ

Can I receive a VAC ban for using this?

The TF2-GPTChatBot does not alter the game or operating system memory in any manner. It solely utilizes the built-in features of the game engine as intended.

How to deal with spammers?

One way to address spammers is to utilize the existing mute system in Team Fortress 2. It can be used to mute players who are spamming messages. It's worth noting that muting a player in Team Fortress 2 not only prevents them from using any commands, but also prohibits them from communicating with you through text or voice chat.

Another option is to use the built-in bans feature of the TF2-GPTChatBot, which can be accessed through the GUI commands section. This feature allows you to ban specific players who are engaging in spamming behavior, preventing them from interacting with the program.

The program has stopped working and I am unable to type in the chat, but I can still see messages in the program window. What can I do?

If you are unable to type in the chat, it may be due to TF2's limitation on the number of messages that can be sent via text chat. This also affects the TF2-GPTChatBot's ability to answer user messages. To make this issue less frequent, you can lower the HARD_COMPLETION_LIMIT value in the config.ini file to limit how much text is sent through TF2 chat. Capping answers at 120 characters, for example, helps prevent the chat from getting flooded.
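
The relevant config.ini line would then look roughly like this (a sketch; check where HARD_COMPLETION_LIMIT appears in your config.ini):

HARD_COMPLETION_LIMIT=120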

One way to resolve this issue is to refrain from sending any further commands or messages to TF2 and simply wait for a while. This generally helps.

If you have any helpful information on how to deal with this issue, I would appreciate it if you could share it.

Other Source Engine games?

TF2-GPTChatBot currently doesn't support other Source Engine games, but support may be added in the future. At the moment, I am not aware of any limitations that could pose a problem.

TF2 Bot Detector Cooperation (TF2BD)

To launch the applications successfully, start TF2-GPTChatBot and TF2 Bot Detector (do NOT launch TF2 via TF2BD). Set the following launch options in Steam:

-rpt -high -usercon +developer 1 +contimes 0 +sv_rcon_whitelist_address 127.0.0.1 +sv_quota_stringcmdspersecond 1000000 +alias cl_reload_localization_files +ip 0.0.0.0 +rcon_password password +hostport 42465 +con_timestamp 1 +net_start +con_timestamp 1 -condebug

And then launch TF2 through Steam.

NOTE: TF2BD may partially work without setting the launch parameters, but some features may not function properly.

Contributing

We welcome contributions to this project!

If you have any questions or problems with the project, please open an issue and we'll be happy to help. Please be respectful to everyone in the project :).


tf2-gptchatbot's Issues

Message sending delay (+queueing)

Some servers don't allow messages to be sent too quickly, so I think adding a customizable delay before sending a continuation of the previous message might prevent this. And when multiple users are using the bot, I would like the bot to finish sending the first message it generated before sending the next ones. (Or somehow label the message so that it is clear who it is being sent to.)


The program does not see the command on some servers

(I changed !gpt3 to !ai; it works fine)
Server on the screenshot: 162.248.94.182:27015
Ignore the "Couldn't connect" warning; TF2 wasn't launched at that moment.

Maybe it's because of the score on the left side of the nickname? There was a server where TF2 and Discord chat were connected, and people on Discord couldn't get the AI working while others could (they had a "discord" tag on the left side of their nickname).
The server I'm talking about: 172.232.147.231:27016 (it's empty most of the time)

Here's the console log, even if you might not need it:
console.log

I don't want the AI to respond to every move I make.

Is there a way to configure this without scurrying around in code?

It's just a bit annoying...

Yeah, I could just not kill people, stop capturing/defending points, kill binding, or JOINING A SERVER? And AI won't generate responses to that, but it would be much nicer if I could just type 0 or FALSE in the config.ini rather than making a whole issue in there just to get this kind of feature, or maybe I'm just dumb and there's a button to turn this off, who knows?

Thanks in advance.


[TEXT-GENERATION-PUBLIC-API] might be deprecated

--public-api is unreliable, and it doesn't work for me and a couple of other users.
oobabooga/text-generation-webui#3570
I have a feeling the code that handles the API calls is outdated since text-gen's API was updated fairly recently.
It'd be super cool if you could update it or just get rid of it altogether.
Updating it would let people fine-tune the model's parameters and customize the instructions to fit their model's needs while also keeping things private.

Error with time.py

I was using TF2-GPTChatBot as usual, and I got this issue the next time I tried to get a response.
I have no idea how to fix it and what caused it to suddenly pop up.
Console log:

Exception in thread Thread-1 (parse_console_logs_and_build_conversation_history):
Traceback (most recent call last):
  File "C:\AIstuff\TF2-GPTChatBot\modules\utils\time.py", line 43, in get_minutes_from_str
    struct_time = time.strptime(time_str, "%H:%M:%S")
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\_strptime.py", line 562, in _strptime_time
    tt = _strptime(data_string, format)[0]
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\_strptime.py", line 349, in _strptime
    raise ValueError("time data %r does not match format %r" %
ValueError: time data '2:' does not match format '%H:%M:%S'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\threading.py", line 946, in run
    self._target(*self._args, **self._kwargs)
  File "C:\AIstuff\TF2-GPTChatBot\modules\chat.py", line 67, in parse_console_logs_and_build_conversation_history
    for logline in get_console_logline():
  File "C:\AIstuff\TF2-GPTChatBot\modules\utils\text.py", line 210, in get_console_logline
    stats_regexes(line)
  File "C:\AIstuff\TF2-GPTChatBot\modules\utils\text.py", line 164, in stats_regexes
    tm = get_minutes_from_str(time_on_server)
  File "C:\AIstuff\TF2-GPTChatBot\modules\utils\time.py", line 46, in get_minutes_from_str
    struct_time = time.strptime(time_str, "%M:%S")
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\_strptime.py", line 562, in _strptime_time
    tt = _strptime(data_string, format)[0]
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\_strptime.py", line 349, in _strptime
    raise ValueError("time data %r does not match format %r" %
ValueError: time data '2:' does not match format '%M:%S'

GroqCloud support?

The introduction of Llama 3 by Meta, which approaches the capabilities of GPT-4, has led to its inclusion in Groq's suite of available models. Unfortunately, my current hardware cannot run these models at a decent speed. However, Groq can, achieving approximately 300 tokens per second at relatively low cost, and it is currently offering a free beta.
https://console.groq.com/playground

Bot can't connect

Whenever I open the executable, I get "Couldn't connect! Retrying in 4 second...".
I have properly added the launch options with nothing else and put in my GPT-3 key. Using ChatGPT from the GUI works fine. Am I doing something wrong?

Better Chat Conversation History? User Isolation and Nickname Features.

Hello again. So, have you noticed that the !chat command is kinda throwing everyone's messages into one big pot? That's definitely a recipe for chaos. This can lead to several problems due to spammers, preventing an adequate conversation with the chatbot.
I've got a couple of ideas to tidy things up:

  • Make chat isolation for each user (separate chat history for each)
    OR
  • Add dynamic nickname (Adds the player’s nickname to the chat history)
    • Anonymous mode (User1, User2, User3)
    • Public mode (Scout, Engineer, Walter Hartwell White)

In practice, I've observed that people generally prefer more privacy rather than sharing the same chat space with others. Take roleplaying as an example: a user can have a cute little conversation with the chatbot and someone interrupts it, and now the chatbot thinks that the user they were just talking to said something devious, an outcome that no user desires.

Example of the current way the chat history is handled:
User: Hello.
Assistant: Greetings! As an artificial intelligence, I am fully at your service. Feel free to inquire!
User (Different person): Do you speak French?
Assistant: I am proficient in several languages, French being one of them, so yes!
User: Can you engage in roleplaying?
Assistant: As an artificial intelligence, I am equipped to handle a variety of tasks, including engaging in roleplay across multiple languages and French.

Groq is great, system role next?

Groq's speed is impressive; the responses are instant, as if all of them were pre-made. But it lacks personality.
While we have CUSTOM_PROMPT that addresses this issue, it would be nice to utilize the system role to determine its behavior.
I believe it's more efficient than inputting
[user message] (act like Soldier from the 2007 hit game Team Fortress 2. Soldier talks like blah blah blah... He likes buckets and blah blah blah...)
every single time.

The chatbot stops working after attempting to chat with GPT using an incorrect API key.

Users can intentionally type !pc or other !gpt commands, which causes a fatal error. The software should ignore the error to prevent the chatbot from malfunctioning. Or make an option to disable GPT commands.

Exception in thread Thread-1 (parse_console_logs_and_build_conversation_history):
Traceback (most recent call last):
  File "C:\AIstuff\TF2-GPTChatBot\modules\api\openai.py", line 127, in get_response
    response = send_gpt_completion_request(conversation_history, username, model=model)
  File "C:\AIstuff\TF2-GPTChatBot\modules\api\openai.py", line 38, in send_gpt_completion_request
    completion = openai.ChatCompletion.create(
  File "C:\AIstuff\TF2-GPTChatBot\venv\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "C:\AIstuff\TF2-GPTChatBot\venv\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "C:\AIstuff\TF2-GPTChatBot\venv\lib\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "C:\AIstuff\TF2-GPTChatBot\venv\lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\AIstuff\TF2-GPTChatBot\venv\lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.AuthenticationError: Incorrect API key provided: sk-11111***************************************1111. You can find your API key at https://platform.openai.com/account/api-keys.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "C:\Users\M8705\AppData\Local\Programs\Python\Python310\lib\threading.py", line 946, in run
    self._target(*self._args, **self._kwargs)
  File "C:\AIstuff\TF2-GPTChatBot\modules\chat.py", line 74, in parse_console_logs_and_build_conversation_history
    controller.process_line(logline)
  File "C:\AIstuff\TF2-GPTChatBot\modules\command_controllers.py", line 108, in process_line
    handler(logline, self.__shared)
  File "C:\AIstuff\TF2-GPTChatBot\modules\commands\openai.py", line 41, in handle_user_chat
    conv_his = handle_cgpt_request(
  File "C:\AIstuff\TF2-GPTChatBot\modules\api\openai.py", line 72, in handle_cgpt_request
    response = get_response(conversation_history.get_messages_array(), username, model)
  File "C:\AIstuff\TF2-GPTChatBot\modules\api\openai.py", line 141, in get_response
    main_logger(f"Unhandled error happened! Cancelling ({e})")
TypeError: 'Logger' object is not callable


Custom Instructions

Is there a way to put a custom instruction that comes in each request? (!gpt3 input)

Let's say this is our default prompt:

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Input. Answer in less than 128 chars!

### Response:
Sure, what would you like to know?

And I want it to be:

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Input. Answer in less than 128 chars! Be funny and do 9/11 jokes as if you were Walter White from Breaking Bad.

### Response:
Well, I'd say cooking up blue meth was one thing... but blowing up towers? That takes some real chemistry.

Or something like:

Below is an instruction that describes a task. Write a response that appropriately completes the request. You're an AI chatbot integrated into Team Fortress 2. If someone asks a question with no context they might be talking about the TF2.

### Instruction:
When is the next update? Answer in less than 128 chars!

### Response:
The next Team Fortress 2 update isn't confirmed, but we usually get them around October or November each year. Stay tuned for announcements on our official channels!

By the way, I'm running it locally with oobabooga text-gen. Because the --public-api cmd flag doesn't work for me for some reason (it says it couldn't connect to Cloudflare), I just set a system variable OPENAI_API_BASE=http://127.0.0.1:5000/v1 and use it as if it were connected to OpenAI (in a recent update they changed their API to be similar to the one OpenAI has; I believe it was previously just an extension, but it's now the default option).

Losing Admin Privileges Due to Community Server Name Tags

Some servers put labels or other tags in front of the username, which nullifies admin privileges and leads to several problems with the !clear and !gpt4 admin commands. This issue also affects other features that rely on usernames.

Example:

[Player] punching bag: !clear \global
Clearing chat history for user '[Player]  punching bag'.
*SPEC* punching bag: !clear \global
Clearing chat history for user '*SPEC* punching bag'.
*SPEC* [Player] punching bag: !pcc I will burn this planet down
[*SPEC* [Play..] hahaha what you got against earth?
      ^ Who is he referring to? (I can bump up the shortened username limit but it defeats its purpose)

Possible solution: code that looks into a tags.json file and strips off the included tags, plus a GUI command that allows you to add tags, similar to the ban feature.

> tag uwu
[01:54:50] -- TAGGED 'uwu'
> untag uwu
[01:54:55] -- UNTAGGED 'uwu'
