- Argument validation for complex types
- Automatic JSON Schema creation
- Utility methods for end-to-end tool call processing
```python
@openai_function
def get_stock_price(ticker: str, currency: Literal["USD", "EUR"] = "USD"):
    """
    Get the stock price of a company, by ticker symbol

    Parameters
    ----------
    ticker
        The ticker symbol of the company
    currency
        The currency to use
    """
    return f"182.41 {currency}, -0.48 (0.26%) today"
```
```python
schema = get_stock_price.schema
json.dumps(schema, indent=4)
```

```json
{
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": "Get the stock price of a company, by ticker symbol",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "The ticker symbol of the company"
                },
                "currency": {
                    "type": "string",
                    "description": "The currency to use",
                    "enum": [
                        "USD",
                        "EUR"
                    ],
                    "default": "USD"
                }
            },
            "required": [
                "ticker"
            ]
        }
    }
}
```
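A decorator like this can be sketched by reading the function's type hints at decoration time. Below is a minimal illustration of that idea, not the actual `openai_function` implementation; the helper name `build_schema` and the type mapping are assumptions:

```python
import inspect
import typing
from typing import Literal

def build_schema(func):
    """Sketch: derive a JSON-Schema-like dict from a function's signature."""
    hints = typing.get_type_hints(func)
    props, required = {}, []
    for name, param in inspect.signature(func).parameters.items():
        hint = hints.get(name, str)
        prop = {}
        if typing.get_origin(hint) is Literal:
            # Literal["USD", "EUR"] becomes a string enum
            prop["type"] = "string"
            prop["enum"] = list(typing.get_args(hint))
        elif hint is int:
            prop["type"] = "integer"
        else:
            prop["type"] = "string"
        if param.default is inspect.Parameter.empty:
            required.append(name)
        else:
            prop["default"] = param.default
        props[name] = prop
    return {
        "type": "function",
        "function": {
            "name": func.__name__,
            "parameters": {"type": "object", "properties": props, "required": required},
        },
    }

def get_stock_price(ticker: str, currency: Literal["USD", "EUR"] = "USD"):
    return f"182.41 {currency}"

schema = build_schema(get_stock_price)
```

Docstring parsing (for the per-parameter descriptions) would layer on top of the same signature walk.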
An essential tool for every prompt engineer, templatify is a string templating tool that brings Jinja2 into your Python code.
- Easy to use: No need to manually configure an environment, but you still have full control over the environment if needed.
- Declarative: It's irrefutably clear that the string you've written is a template
- Dynamic Code Generation: The `@template` decorator dynamically creates a function whose signature is identical to the one you've written, and which passes its arguments to your template. This achieves both runtime safety (your parameter names are validated against all dependencies of the template) and static type safety (your type checker respects the function you've written, as-is).
```python
@template
def greet_user(name: str, age: int = 10):
    "Hello, {{ name|upper }}! You are {{ [1,2,3,4]|random }} years old."
```
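Under the hood, a decorator of this kind can be approximated by compiling the docstring as a Jinja2 template and binding call arguments through the function's signature. A rough sketch of the pattern (assumes Jinja2 is installed; this is not templatify's actual internals):

```python
import functools
import inspect
import jinja2

def template(func):
    """Sketch: render the function's docstring as a Jinja2 template."""
    tmpl = jinja2.Template(inspect.getdoc(func))
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Map positional/keyword args onto the declared parameter names
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()
        return tmpl.render(**bound.arguments)

    return wrapper

@template
def greet_user(name: str, punctuation: str = "!"):
    "Hello, {{ name|upper }}{{ punctuation }}"
```

Because the wrapper reuses the declared signature, `greet_user("sam")` renders `"Hello, SAM!"` while your type checker still sees the function exactly as written.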
When your code breaks, debuggpt has a clearer picture of what happened than you do, before it even communicates with an LLM.
When placed over a function that fails, `@debug_gpt` sends GPT-4 a comprehensive report on the state of your program at the moment the error occurred. The LLM sees an in-depth walkthrough of the call stack, with annotated blocks of source code, the types and values of objects at key moments, a history of your printed outputs, the original traceback, and more.
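Assembling such a report boils down to walking the exception's traceback and recording each frame's source line and local variables. A simplified sketch of that mechanism (the report format here is invented for illustration):

```python
import linecache
import traceback

def crash_report(exc: BaseException) -> str:
    """Sketch: summarize the call stack, locals, and source at each frame."""
    lines = []
    tb = exc.__traceback__
    while tb is not None:
        frame = tb.tb_frame
        filename = frame.f_code.co_filename
        lineno = tb.tb_lineno
        source = linecache.getline(filename, lineno).strip()
        lines.append(f"{frame.f_code.co_name} ({filename}:{lineno})")
        lines.append(f"  source: {source}")
        # Record each local's value and type at the moment of failure
        for name, value in frame.f_locals.items():
            lines.append(f"  {name} = {value!r} ({type(value).__name__})")
        tb = tb.tb_next
    lines.append("".join(traceback.format_exception_only(type(exc), exc)).strip())
    return "\n".join(lines)

def divide(a, b):
    return a / b

try:
    divide(1, 0)
except ZeroDivisionError as e:
    report = crash_report(e)
```

A real implementation would also trim huge values and capture recent stdout, but the frame walk above is the core of it.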
Greatly enhanced dictionaries, with a Pandas-like API, pretty representation, and algorithms for complex transformations and aggregations.
Lazytables makes it easy to manage a large number of data sources in your code. `@lazytables` makes your class act like a database, where your instance is the namespace and each attribute is a table.
With a SQL-like syntax for accessing data, you can freely access data sources at your leisure, with peace of mind that data will only be read on demand as you need it, and the same data will never be read twice.
Lazytables puts all the power and control in your hands. It has no authority over how data is read or written. In fact, it doesn't even know how your data is read or written.
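The core mechanic here, reading on first access and caching thereafter while the caller supplies the readers, can be sketched with `__getattr__`. This is an illustration of the pattern only; the reader-registration scheme below is made up, not lazytables' API:

```python
class LazyTables:
    """Sketch: attributes load via user-supplied reader functions,
    only on first access, and are cached so nothing is read twice."""

    def __init__(self, readers):
        # readers: dict of attribute name -> zero-arg function that reads the data.
        # LazyTables never knows *how* the data is read; the caller decides.
        self._readers = readers
        self._cache = {}
        self.read_count = 0

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails
        if name.startswith("_"):
            raise AttributeError(name)
        if name not in self._cache:
            reader = self._readers.get(name)
            if reader is None:
                raise AttributeError(name)
            self._cache[name] = reader()
            self.read_count += 1
        return self._cache[name]

tables = LazyTables({"sales": lambda: [("2023-01", 100), ("2023-02", 120)]})
```

The first `tables.sales` access invokes the reader; every later access returns the cached object.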
An experimental new design pattern for working with data in a notebook environment.
ScopeSpace
is a context manager whose inner block has its own local scope. When that block ends, the name you assigned to the context manager becomes a namespace storing all new declarations made within the scoped block.
```python
with ScopeSpace() as bar:
    stuff = 10

print(stuff)      # NameError: name 'stuff' is not defined
print(bar.stuff)  # 10
```

```python
x = 5
with ScopeSpace() as foo:
    x = x + 1

print(x)      # 5
print(foo.x)  # 6
```
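One way such a context manager could work is to snapshot the caller's frame variables on entry, diff them on exit, and move the new names into a namespace. A sketch of that approach using frame introspection (reliable at module scope; the real ScopeSpace internals may differ, and rebinding of pre-existing names like `x` above needs extra handling this sketch omits):

```python
import sys
import types

class ScopeSpace:
    """Sketch: capture names assigned inside the with-block into a namespace."""

    def __enter__(self):
        self._frame = sys._getframe(1)          # the caller's frame
        self._before = set(self._frame.f_locals)  # names that already existed
        self.ns = types.SimpleNamespace()
        return self.ns

    def __exit__(self, *exc):
        scope = self._frame.f_locals
        for name in set(scope) - self._before:
            if scope[name] is self.ns:
                continue  # keep the `as` target itself alive
            # Move the new binding out of the caller's scope, into the namespace
            setattr(self.ns, name, scope.pop(name))
        return False

with ScopeSpace() as bar:
    stuff = 10
```

After the block, `stuff` is gone from the enclosing scope but available as `bar.stuff`.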
A front-end framework for Excel that can do magical things.
The problem: With traditional tools, scripting Excel is tedious for two reasons:
- Layout: You must refer to actual cell locations in your code, and tell each cell what to do.
- Cell References: The most important feature of a spreadsheet - the ability to see how calculations were made - is not available to you when scripting.
With excelbird:
- Layout and styling are as easy as building an HTML page. You don't have to tell cells where to go.
- A dataframe library where all calculations are lazily evaluated as formulas and cell references at write time.
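The lazy-formula idea, where arithmetic on cell objects produces formula strings rather than values, can be illustrated with a toy expression class. This is a conceptual sketch, not excelbird's API:

```python
class Cell:
    """Sketch: a cell whose arithmetic builds an Excel formula string."""

    def __init__(self, ref):
        self.ref = ref  # e.g. "A1", or a parenthesized sub-expression

    def _combine(self, other, op):
        other_ref = other.ref if isinstance(other, Cell) else str(other)
        # Record the operation instead of computing a value
        return Cell(f"({self.ref}{op}{other_ref})")

    def __add__(self, other):
        return self._combine(other, "+")

    def __mul__(self, other):
        return self._combine(other, "*")

    @property
    def formula(self):
        return "=" + self.ref

price, qty = Cell("A1"), Cell("B1")
total = price * qty + Cell("C1")
```

At write time, `total.formula` yields `"=((A1*B1)+C1)"`, so the spreadsheet keeps live cell references instead of baked-in numbers.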
A Python web app for visualizing Colorado geographic data. Nearly 400 variables to choose from, including crime stats, census data, student demographics, viewable by county or by district.
Tech
- Web Framework: Plotly Dash for Python
- Logic and data structures: Geopandas dataframes, and pure Python
- Geocoding: Google API
- UI Components: Mostly Dash Bootstrap components with some Dash Core components, and a lot of custom styling.
A simple tool to enable an unconventional but sometimes useful coding style. In simple terms, it's a framework for modifying existing classes in an easy, organized, and readable way. See the below examples.
On the left is your code. On the right is how it's interpreted at runtime.
Elements under `@Extend` and `class _(Extend)` get 'moved' to `Something` (they become `None` in global scope and are set as attributes on `Something`).
A python package/API for building network flow optimization models in a notebook environment.
- The network's shape and constraint stack is dynamic in every aspect, making experimentation effortless; the user's code follows the same structure regardless of how the model looks.
- Unlike other implementations, the user can easily constrain upper/lower bounds at any level of granularity (and any location) within the network, to support real-world, complex, edge-case situations, and make quick modification/experimentation easy.
- Set flow bounds on entire layers, nodes within each layer, or individual edges between nodes.
- Excellent cell outputs when displaying model features in a notebook (Custom display methods are implemented for all model features)
An extensive data wrangling, cleaning, and geocoding pipeline to prepare data for the geo-visualization web app highlighted above. Data is extracted from more than a dozen public data sources, cleaned/engineered for analysis and visualization, geocoded, and joined on custom keys. Resulting dataset has ~350 geocoded metrics for each county in Colorado over 8 consecutive years, and ~140 geocoded metrics for each school district.
Python library with pre-configured visualizations, functions for building charts rapidly, and an API for exploring, managing, loading, and generating documentation for online tabular datasets.
A much better correlation matrix/heatmap. Marks are sized based on the strength of the correlation, and it offers advanced options such as masking marks below a threshold, excluding variables that correlate on average below a threshold, and, by default, masking duplicate correlations and self-on-self correlations.
```python
# Make an 18x18 inch chart with pre-defined styling, circular marks, grid hidden,
# hiding correlations below 0.1, hiding self-on-self correlations (default),
# and hiding repeated/duplicate correlations from the right side (default)
ct.set_style(18)
ct.superheat(df.corr(), thresh_mask=0.10, grid=False, marker='o');
```
Simple Python package that mimics the interface of Excel's Solver for linear optimization problems. Designed solely for ease of use, it requires almost zero Python experience, but lacks the flexibility of more developer-oriented optimization tools like tsopt, a network flow optimization tool I built.
Problems are solved in a single line of code: call `solver.solve()`, passing arguments in a similar format to how you would lay out the problem in Excel. See the example below.
Example:
Problem: Create a trail mix recipe with the minimum possible cost while meeting nutrition requirements. Each ingredient has a cost and a different arrangement of nutrients. The model is subject to a constraint on the minimum total nutrients of the combined ingredients, and a minimum quantity of each ingredient.
Excel solution: (objective value is under "Total" in row 8. Decision quantities are highlighted in row 5.)
Python solution with excel-solver:
```python
import solver

solver.solve(
    problem_type = "min",
    objective_function = [
        4, 5, 3, 7, 6
    ],
    constraints_left = [
        [10, 20, 10, 30, 20],
        [5, 7, 4, 9, 2],
        [1, 4, 10, 2, 1],
        [500, 450, 160, 300, 500],
    ],
    constraints_right = [
        16,
        10,
        15,
        600,
    ],
    constraints_signs = [
        ">=",
        ">=",
        ">=",
        ">=",
    ],
    minimum_for_all=0.1, # replaces lines 15-19 in the excel image above
)
```
Output:
```
------------------------------------------------------
 MINIMIZE: z = 4a + 5b + 3c + 7d + 6e
------------------------------------------------------
 OPTIMAL VALUE: 8.04
------------------------------------------------------
 QUANTITIES:
   a: 0.44415
   b: 0.18091
   c: 1.35322
   d: 0.1
   e: 0.1
------------------------------------------------------
Optimization terminated successfully. (HiGHS Status 7: Optimal)
```
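As a sanity check, the same trail-mix problem can be reproduced with `scipy.optimize.linprog` (which also uses the HiGHS solver): the ≥ constraints are negated into scipy's ≤ form, and `minimum_for_all=0.1` becomes a lower bound on every variable.

```python
from scipy.optimize import linprog

cost = [4, 5, 3, 7, 6]  # objective coefficients (minimize)
A = [
    [10, 20, 10, 30, 20],
    [5, 7, 4, 9, 2],
    [1, 4, 10, 2, 1],
    [500, 450, 160, 300, 500],
]
b = [16, 10, 15, 600]

# linprog expects A_ub @ x <= b_ub, so negate the >= constraints
res = linprog(
    c=cost,
    A_ub=[[-a for a in row] for row in A],
    b_ub=[-v for v in b],
    bounds=(0.1, None),  # minimum quantity of 0.1 for every ingredient
    method="highs",
)
```

`res.fun` comes out to the same optimal cost of about 8.04, with d and e pinned at their 0.1 lower bound.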
Uses data from the Karve Analytics data warehouse. This is the final part of the Karve project.
(Unmute for narration)
Report.mov
A .NET app to interact with the Karve OLTP sample database to manage the fictitious business.
(Unmute for narration)
Karve.Demo.mp4
A Python script to simulate real business patterns and distributions of customer data to populate the Karve database with sample data. The result is a sample SQL Server database that students can use to practice analytical tasks such as queries or visualizations to discover hidden patterns and trends in the data.
For example...
- Rental order volume and return statistics are distributed bimodally, peaking near Christmas and spring break.
- Rental operations are valid, such that a ski won't be in the hands of more than one customer at a time, won't be used after it has been damaged critically, will be rented less frequently over consecutive seasons, and always gets returned on time at the end of the season.
- The rate at which skis get damaged, the number of damage records per order, and the frequency of different types of ski damage are distributed based on time of season, the type of rider, and the type of ski.
- All customers are treated as real people. Thus, their key identifiers (name, gender, email, home address) line up with each other, and their body type and rider metrics (height, weight, boot size, ability) are aligned with each other. Those metrics also follow the distributions of real people.
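The bimodal order-volume pattern, for instance, can be produced by sampling rental dates from a mixture of two normal distributions centered on the peaks. A simplified sketch of the approach; the peak dates, spreads, and mixture weights here are illustrative, not the simulation's actual parameters:

```python
import datetime
import random

def sample_rental_date(rng, year=2022):
    """Sketch: bimodal date distribution peaking near Christmas and spring break."""
    christmas = datetime.date(year, 12, 20).toordinal()
    spring_break = datetime.date(year + 1, 3, 15).toordinal()
    # 60/40 mixture of two normals, measured in days
    if rng.random() < 0.6:
        day = rng.gauss(christmas, 10)
    else:
        day = rng.gauss(spring_break, 7)
    return datetime.date.fromordinal(round(day))

rng = random.Random(42)
dates = [sample_rental_date(rng) for _ in range(1000)]
```

A histogram of `dates` shows two clear peaks, with most samples landing in late December and mid-March.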
| Infographic | App Prototype |
|---|---|
| Brand Guide | Logo Design |
Randomly generates a unique maze and solves it. Uses pure Java, with custom data structures. As soon as the maze is solved, the path corrects itself to remove dead ends and reveal the shortest path.
Maze.mp4
A Vim plugin for commenting/uncommenting lines of text, with some additional features. I prefer this over commentary because it doesn't use motions. It instead works like a normal IDE, where a single key mapping toggles one or more lines of text.
File headers
- A feature to automatically create headers (your name & today's date) at the top of any new document you create, or any empty document that you open, and comments out that header using correct syntax based on your filetype.
- Lets you toggle the current file's header on/off with a single keypress, without disrupting your code, and without moving your cursor from its relative position.
- As soon as you write/save a file that HAS been modified, the header (if one exists) will be updated with the current date.
- The header format is customizable, including the format of the current date. If you change the date format in your vimrc, the old dates in your previous files will automatically update to the new format once you save/write to them again.
A vim plugin to run Python code inside Vim, without using a terminal. With a single keypress, the output from your current Python file will be placed in a new buffer (window) at the bottom of your editor.
- Intelligent Environment Finding: When you press your key binding, it searches your current directory, and several parent directories, for a Python virtual environment named `env`. If none is found, your machine's global Python kernel is used.
- The current script is executed silently, and its output is placed into a new Vim buffer at the bottom of your window.
- This new buffer is dynamically sized, so it's only tall enough to fit the output of your script. Its height is updated every time you execute.
Amazing documentation like this is hard to come by. Often we don't have time to create a dedicated website for our documentation, and must rely on Github. There are plenty of sweet features you can take advantage of in your readme pages with very little effort. For example, did you know you can fold text like this:
<details>
<summary>CLICK ME!</summary>

I am inside an HTML `details` element. See the tutorial above for how to use me!

```python
print("I'm colored with python syntax highlighting, AND I'm encased inside a text folding element :)")
```

</details>