
courses's Introduction

Anthropic courses

Welcome to Anthropic's educational courses. This repository currently contains four courses. We suggest completing the courses in the following order:

  1. Anthropic API fundamentals course - teaches the essentials of working with the Claude SDK: getting an API key, working with model parameters, writing multimodal prompts, streaming responses, etc.
  2. Prompt engineering interactive tutorial - a comprehensive step-by-step guide to key prompting techniques
  3. Real world prompting course - learn how to incorporate prompting techniques into complex, real world prompts
  4. Tool use course - teaches everything you need to know to implement tool use successfully in your workflows with Claude.
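As a taste of what the API fundamentals course covers, here is a minimal sketch of a single-turn call with the Python SDK. It assumes the `anthropic` package is installed and `ANTHROPIC_API_KEY` is set in the environment; the model name and parameter defaults are illustrative, not prescriptive.

```python
# Minimal single-turn sketch, assuming the `anthropic` package is installed
# and ANTHROPIC_API_KEY is set. Model name and defaults are assumptions.

def build_request(prompt: str,
                  model: str = "claude-3-5-sonnet-20240620",
                  max_tokens: int = 1024,
                  temperature: float = 0.0) -> dict:
    """Assemble keyword arguments for client.messages.create()."""
    return {
        "model": model,
        "max_tokens": max_tokens,    # hard cap on the completion length
        "temperature": temperature,  # 0.0 for near-deterministic output
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_claude(prompt: str) -> str:
    """Send a single-turn prompt and return the text of the reply."""
    import anthropic                 # client reads ANTHROPIC_API_KEY
    client = anthropic.Anthropic()
    response = client.messages.create(**build_request(prompt))
    return response.content[0].text
```

The request is kept in a separate builder so the same parameters can be reused for streaming or multi-turn variants later in the courses.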

courses's People

Contributors

abeusher, alexalbertt, colt, elie, l1n, maggie-vo, polkerty


courses's Issues

ToolUse: How to use tools to scalably chunk output beyond the output token limit?

I am writing a module that uses AI techniques to digest large quantities of information and, in a layered, stepwise fashion, distill it cumulatively into an intermediate structured form that can be used as input to create a written human rights application.

I started with OpenAI's GPT-4o but hit issues with its 4,000-token max completion cap.
I then moved to OpenAI's equivalent of the "tools" feature, in an effort to have the AI generate the output in "chunks" that, collected together, represent the final output, which may be very long and exceed a model's completion token cap.

I then moved on to adapting the script to Anthropic (this is almost complete).

It appears the API wants previous content sent back to it in order to complete the "conversation", but this may not scale for me. To summarize two months of evidence and provide appropriate system content, I am already using 150,000 of the 200,000-token max context length. I will try to make this work by sending back the most recent tool_use block that matches the tool_result response, and see whether the content flows properly into my tool (which saves chunked content to be reassembled later).

In a first pass, I have the AI take detailed content from particular files containing emails and text messages within a particular date range and output it in a summarized, structured format. This uses the majority of the context window (the system prompt contains legal and medical background documentation; the user prompt contains the email/text message data), while the output is relatively small.

In pass 2 I take all those pass 1 output blocks and cumulatively combine them, meaning that both input and output sizes are expected to grow quite large. More precisely, I do token counting and fit as many input blocks as I can, factoring the expected output token size into the calculation. While Claude's 200,000-token context window is certainly better than GPT-4o's 128,000-token window, I still need a scalable solution to summarize vast quantities of data without hitting any output token limits.
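The pass-2 packing step described above can be sketched as a greedy fit against a token budget. This is a hypothetical illustration: the token estimate here is a crude whole-word heuristic, whereas a real script would use the API's own token counting.

```python
# Hypothetical sketch of the pass-2 step: fit as many pass-1 summary blocks
# as possible into the context window, reserving room for the expected
# output. The token estimate is a crude heuristic, not the API's count.

def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~1.3 tokens per whitespace-separated word."""
    return int(len(text.split()) * 1.3) + 1

def pack_blocks(blocks, context_window=200_000, expected_output=4_000,
                system_tokens=0):
    """Greedily select leading blocks that fit the remaining budget."""
    budget = context_window - expected_output - system_tokens
    chosen, used = [], 0
    for block in blocks:
        cost = estimate_tokens(block)
        if used + cost > budget:
            break
        chosen.append(block)
        used += cost
    return chosen, used
```

Blocks that don't fit would be carried over to the next pass-2 call, so the combine step can be repeated until everything is merged.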

I should have asked: does Claude 3.5 Sonnet have any restriction on maximum output/completion token count?

Is the tools feature the most appropriate feature to use in this case? If so, how can I avoid sending in past context so huge that it quickly consumes the context window?

https://community.openai.com/t/inception-based-design-for-the-ai-assisted-creation-of-a-written-human-rights-complaint/863669

PS. I'm getting the following error because I omit sending back previous context when responding with tool call results:

Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'messages.0: tool_result block(s) provided when previous message does not contain any tool_use blocks'}}

The "tool_use" block contains the most recent chunked output.

Let's say the final output contains ten chunks: does Claude expect me to send back only the most recent chunk via a tool_use block, or all previous tool_use and tool_result blocks? This sequence is not at all clear to me. The requirement is to provide system/user prompts and have the output chunked back to us such that the joined chunks form parts of a single whole response (so the model must somehow maintain state across the initial call and the subsequent call containing the tool result content).

Also, when providing a tool result, do I also need to send back the initial user and system prompts? If so, the scalability of this approach may be in jeopardy.
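Regarding the 400 error quoted above: the message it complains about must be paired with the call that produced it. A hedged sketch of that pairing, with a hypothetical tool name and id, could look like this; the key constraint is that a user message carrying a tool_result must immediately follow an assistant message containing the tool_use block with the matching id.

```python
# Hedged sketch of the ordering the 400 error complains about: each
# tool_result block lives in a user message that immediately follows the
# assistant message containing the matching tool_use (same id). The tool
# name and id below are hypothetical.

def tool_result_turn(tool_use_id: str, tool_name: str,
                     tool_input: dict, result: str) -> list:
    """Build the assistant/user message pair that pairs a tool call
    with its result, ready to append to the conversation history."""
    return [
        {
            "role": "assistant",
            "content": [
                {"type": "tool_use", "id": tool_use_id,
                 "name": tool_name, "input": tool_input},
            ],
        },
        {
            "role": "user",
            "content": [
                {"type": "tool_result", "tool_use_id": tool_use_id,
                 "content": result},
            ],
        },
    ]
```

Sending a tool_result without the preceding assistant tool_use message is exactly what triggers the `messages.0: tool_result block(s) provided when previous message does not contain any tool_use blocks` error.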

Wrong output on 10.2 - Prompt Engineering Interactive Tutorial


This doesn't work properly; I tried a few times (with temp = 0) and it always returned:

<function_calls>
<invoke name="calculator">
<parameter name="first_operand">5</parameter>
<parameter name="second_operand">3</parameter>
<parameter name="operator">+</parameter>
</invoke>

Use %pip install instead of !pip install in the notebooks

I was following this notebook: https://github.com/anthropics/courses/blob/cf2979dc88626f15c760ce83bc9e9e21015fcac5/prompt_engineering_interactive_tutorial/Anthropic%201P/00_Tutorial_How-To.ipynb

I cloned the repo and started the notebook like this, which was delightfully fast:

uv run jupyter-notebook .

But I got an error trying to import anthropic. It turns out this was because I was running Jupyter from a different environment, so this line:

!pip install anthropic

Installed the package into my default system environment, not the one for the notebook.

The fix is to use this instead:

%pip install anthropic
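The difference comes down to which interpreter pip targets: the `%pip` magic runs pip against the kernel's own interpreter, while `!pip` runs whatever `pip` is first on the shell PATH. A quick way to see which environment a notebook kernel would install into:

```python
# Show which interpreter the running kernel uses; %pip installs against
# this interpreter, whereas !pip may target a different environment.
import sys

def kernel_pip_command() -> str:
    """Return the pip invocation bound to the running interpreter."""
    return f"{sys.executable} -m pip"

print(kernel_pip_command())
```

Running `python -m pip` against `sys.executable` directly is an equivalent, environment-safe fallback outside notebooks.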

I just can't get multi-turn tool use to work

In multi-turn mode, the API keeps hallucinating a tool use as a standard text reply.


It is unclear how to encode the tool use and its result in the message history.

A simple payload or API example that illustrates this (just a messages block with a bunch of history in text form) would be enormously helpful.
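In that spirit, here is a hedged sketch of what such a payload could look like: plain-text turns interleaved with a tool call, where the assistant's tool_use block is immediately followed by a user message carrying the matching tool_result. The tool name, id, and weather data are all made up for illustration.

```python
# Hypothetical multi-turn payload: ordinary text turns plus one tool call.
# The tool_result's tool_use_id must match the id of the assistant's
# tool_use block directly before it. Names, ids, and data are invented.

messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
    {
        "role": "assistant",
        "content": [
            {"type": "text", "text": "I'll look that up."},
            {"type": "tool_use", "id": "toolu_abc",
             "name": "get_weather", "input": {"city": "Paris"}},
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "tool_result", "tool_use_id": "toolu_abc",
             "content": "18C and clear"},
        ],
    },
    {"role": "assistant",
     "content": "It's currently 18C and clear in Paris."},
    {"role": "user", "content": "And tomorrow?"},
]
```

Note that the roles strictly alternate: the tool_result counts as a user turn, so the model's final answer comes back as the next assistant message before the conversation continues.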

Real World Prompting Feedback

I recently converted the Real World Prompting tutorial to TypeScript to make it more accessible (and for my personal learning and enjoyment). Along the way, I made a few adjustments to the content, so I wanted to share these changes with the course creator.

1. API Parameters

  • The tutorial is still referencing the old sonnet model (claude-3-sonnet-20240229).
  • The temperature param is not set, even for tasks that could benefit from it (e.g. summarization).

2. Prompt Adjustments

As a result of adjusting the params in 1), I'm seeing way fewer hallucinations and out-of-character behavior (e.g. an assistant referencing <context>), but I noticed some areas where performance declined. For example:

  • In chapter 4, I had to explicitly instruct Claude to output the <json> tag after "Generate a JSON object with the following structure". Without that, Sonnet 3.5 would skip wrapping its response in tags (which might be preferable in most cases).
  • Also in chapter 4, when dealing with incomprehensible (e.g. "blah blah blah") and empty transcripts, Sonnet 3.5 would fail to output a JSON object. Instead, it responded with a long explanation asking for a valid transcript. I solved this by adding two additional examples to the prompt.
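One way to sketch the chapter 4 fix described above is to fold the degenerate-transcript cases into the prompt as explicit examples. The prompt text, tag names, and JSON shape below are assumptions for illustration, not the course's actual prompt.

```python
# Hypothetical sketch of the chapter-4 fix: add examples covering empty and
# incomprehensible transcripts so the model still emits a <json>-wrapped
# object instead of asking for a valid transcript. Structure is invented.

EXAMPLES = """\
<example>
Transcript: "blah blah blah"
Output: <json>{"summary": null, "reason": "incomprehensible transcript"}</json>
</example>
<example>
Transcript: ""
Output: <json>{"summary": null, "reason": "empty transcript"}</json>
</example>"""

def build_prompt(transcript: str) -> str:
    """Assemble the summarization prompt with degenerate-case examples."""
    return (
        "Generate a JSON object with the following structure, wrapped in "
        "<json> tags. Always output JSON, even for empty or garbled "
        f"transcripts.\n{EXAMPLES}\nTranscript: \"{transcript}\""
    )
```

The two examples act as few-shot anchors: once the model has seen JSON emitted for a garbled and an empty transcript, it is far less likely to fall back to a prose refusal.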

If you'd like more details on these changes, please let me know. For reference, here's the project I've been working on.

Lastly, I appreciate the effort put into creating this course and I'm looking forward to future updates!
