Comments (6)
I'm not sure if it's possible to treat an expression as "star-args".
Some "workarounds" in case they are of any use:
It is possible to use a struct, but it changes what the function receives:
df = pl.DataFrame({
    "B_1": [1, 2, 3, 4],
    "B_2": [5, 6, 7, 8],
    "B_3": [9, 10, 11, 12]
}).with_row_index()

df.with_columns(
    pl.map_batches(
        pl.struct("^B_.*$"),
        lambda x: x[0].struct[0] + x[0].struct[1] + x[0].struct[2]
    ).name.prefix("C_")
)
# shape: (4, 5)
# ┌───────┬─────┬─────┬─────┬───────┐
# │ index ┆ B_1 ┆ B_2 ┆ B_3 ┆ C_B_1 │
# │ --- ┆ --- ┆ --- ┆ --- ┆ --- │
# │ u32 ┆ i64 ┆ i64 ┆ i64 ┆ i64 │
# ╞═══════╪═════╪═════╪═════╪═══════╡
# │ 0 ┆ 1 ┆ 5 ┆ 9 ┆ 15 │
# │ 1 ┆ 2 ┆ 6 ┆ 10 ┆ 18 │
# │ 2 ┆ 3 ┆ 7 ┆ 11 ┆ 21 │
# │ 3 ┆ 4 ┆ 8 ┆ 12 ┆ 24 │
# └───────┴─────┴─────┴─────┴───────┘
Create a Series from the columns:
df.with_columns(
    pl.map_batches(
        list(pl.Series(df.columns).str.extract('^(B_.*)$').drop_nulls()),
        lambda x: x[0] + x[1] + x[2]
    )
    .name.prefix("C_")
)
Which I think would also be equivalent to expanding a selector:
import polars.selectors as cs

df.with_columns(
    pl.map_batches(
        cs.expand_selector(df, cs.matches("^B_.*")),
        lambda x: x[0] + x[1] + x[2]
    )
    .name.prefix("C_")
)
In fact, in my scenario the column names produced by to_dummies
are dynamic, but I want the ols
function to stay simple.
The last two methods you provided additionally need to be passed the df:
import polars as pl
import polars.selectors as cs

df = pl.DataFrame({
    "A": [9, 10, 11, 12],
    "B": [1, 2, 3, 4],
}).with_row_index()
df = df.with_columns(df.to_dummies('B'))

def func(yx):
    return yx[0] + yx[1] + yx[2]

def lstsq_1(y, *x):
    return pl.map_batches([y, *x], lambda yx: func(yx))

def lstsq_2(y, *x):
    # it uses the global `df`, which is not ideal
    z = cs.expand_selector(df, x[0])
    return pl.map_batches([y, *z], lambda yx: func(yx))

out = df.with_columns([
    # polars.exceptions.ComputeError: the name: 'resid' passed to `LazyFrame.with_columns` is duplicate
    # lstsq_1(pl.col('A'), cs.matches('^B_.*$')).alias('resid'),
    # works, but relies on the global `df`
    lstsq_2(pl.col('A'), cs.matches("^B_.*$")).alias('resid'),
])
print(out)
"""
shape: (4, 8)
┌───────┬─────┬─────┬─────┬─────┬─────┬─────┬───────┐
│ index ┆ A ┆ B ┆ B_1 ┆ B_2 ┆ B_3 ┆ B_4 ┆ resid │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ u32 ┆ i64 ┆ i64 ┆ u8 ┆ u8 ┆ u8 ┆ u8 ┆ i64 │
╞═══════╪═════╪═════╪═════╪═════╪═════╪═════╪═══════╡
│ 0 ┆ 9 ┆ 1 ┆ 1 ┆ 0 ┆ 0 ┆ 0 ┆ 10 │
│ 1 ┆ 10 ┆ 2 ┆ 0 ┆ 1 ┆ 0 ┆ 0 ┆ 11 │
│ 2 ┆ 11 ┆ 3 ┆ 0 ┆ 0 ┆ 1 ┆ 0 ┆ 11 │
│ 3 ┆ 12 ┆ 4 ┆ 0 ┆ 0 ┆ 0 ┆ 1 ┆ 12 │
└───────┴─────┴─────┴─────┴─────┴─────┴─────┴───────┘
"""
from functools import reduce
import polars as pl
import polars.selectors as cs

df = pl.DataFrame({
    "A": [9, 10, 11, 12],
    "B": [1, 2, 3, 4],
}).with_row_index()
df = df.with_columns(df.to_dummies('B'))

def func_2(yx):
    y = yx[0]
    x = yx[1:]
    return y - reduce(lambda a, b: a + b, x)

def func_1(yx):
    y = yx[0].struct[0]
    x = list(yx[0].struct)[1:]
    return y - reduce(lambda a, b: a + b, x)

def lstsq_1(y, *x):
    return pl.map_batches(pl.struct([y, *x]), lambda yx: func_1(yx))

def lstsq_2(y, *x):
    # it uses the global `df`, which is not ideal
    z = cs.expand_selector(df, x[0])
    return pl.map_batches([y, *z], lambda yx: func_2(yx))

out = df.with_columns([
    lstsq_1(pl.col('A'), pl.col('^B_.*$')).name.prefix('C_'),
    lstsq_2(pl.col('A'), cs.matches("^B_.*$")).name.prefix('D_'),
])
print(out)
"""
shape: (4, 9)
┌───────┬─────┬─────┬─────┬───┬─────┬─────┬─────┬─────┐
│ index ┆ A ┆ B ┆ B_1 ┆ … ┆ B_3 ┆ B_4 ┆ C_A ┆ D_A │
│ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- │
│ u32 ┆ i64 ┆ i64 ┆ u8 ┆ ┆ u8 ┆ u8 ┆ i64 ┆ i64 │
╞═══════╪═════╪═════╪═════╪═══╪═════╪═════╪═════╪═════╡
│ 0 ┆ 9 ┆ 1 ┆ 1 ┆ … ┆ 0 ┆ 0 ┆ 8 ┆ 8 │
│ 1 ┆ 10 ┆ 2 ┆ 0 ┆ … ┆ 0 ┆ 0 ┆ 9 ┆ 9 │
│ 2 ┆ 11 ┆ 3 ┆ 0 ┆ … ┆ 1 ┆ 0 ┆ 10 ┆ 10 │
│ 3 ┆ 12 ┆ 4 ┆ 0 ┆ … ┆ 0 ┆ 1 ┆ 11 ┆ 11 │
└───────┴─────┴─────┴─────┴───┴─────┴─────┴─────┴─────┘
"""
Yeah, as far as I am aware it is only possible "at the expression level" using a struct, because Polars expands multi-column selectors into individual per-column expressions.
i.e.
df.with_columns(
    pl.col("^B_.*$").map_batches(...)
)
is turned into:
df.with_columns(
    pl.col("B_1").map_batches(...),
    pl.col("B_2").map_batches(...),
    pl.col("B_3").map_batches(...)
)
Otherwise you need to query the frame's schema in order to get the list of column names.
If you're chaining a bunch of methods and want the df.columns
of an intermediate step, you can use a pipe with a lambda like this:
(
    df
    .with_columns(a=pl.col('b') * 2)
    .pipe(lambda df: (
        df.with_columns(pl.col(x) for x in df.columns)
    ))
)
The issue seems to be that you can pass a sequence of expressions to pl.map_batches
and it passes them to the function all together (similar to if you had used a struct):

pl.map_batches([pl.col("B_1"), pl.col("B_2"), pl.col("B_3")], ...)

It seems to use this map_mul, which I hadn't seen before:

polars/py-polars/polars/functions/lazy.py
Lines 910 to 912 in 6a181f2

But they want to be able to do this by specifying a single pl.col(regex) instead.