Comments (28)
Thanks for the feedback @danicat! You're totally right -- this is something we need to do and are planning to add. The system definitely supports this use case (in fact, online / field experimentation is where this whole effort started)! There's some background on how Ax is used for product optimization via online evaluation here.
Just to clarify: you can think of Ax broadly as the management & algorithm layer on top of a system that actually does the randomization or routing of users to A or B (such as PlanOut), so Ax doesn't provide the randomization / routing logic itself. We are planning to provide an integration and comprehensive examples of using Ax with PlanOut (https://github.com/facebook/planout). Would that help with providing clearer examples?
There's a bit of background on the design of Ax (formerly AE) for field experimentation in Section 2 in this workshop paper.
If there are other experimentation systems beyond PlanOut that you think it would be valuable to consider integrations with, I'd be really curious to hear.
from ax.
@vmysla, you were exactly right to expect to be able to obtain a trial from the Service API and log its results back without creating an explicit evaluation function. The evaluation function in the tutorial is only there for the example -- as you might notice, it's never actually passed to `AxClient`; it's just used to generate the trial results before completing them: `ax.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))`. So instead of `raw_data=evaluate(parameters)`, you can obtain the trial evaluation data in whichever way is appropriate for your setup. Does that help / make sense?
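The pattern described above -- hand out a parameterization now, report results whenever they arrive -- can be sketched schematically. Note that `MockClient` below is an illustrative stand-in written for this comment, not Ax's actual client; the real calls are `AxClient.get_next_trial()` and `AxClient.complete_trial(trial_index=..., raw_data=...)` as shown in the tutorial.

```python
# Illustrative sketch only: MockClient mimics the *shape* of Ax's Service API
# (get_next_trial / complete_trial); it is NOT the real AxClient.
import random

class MockClient:
    """Hands out parameterizations now; accepts results later."""

    def __init__(self, options):
        self.options = options
        self.pending = {}   # trial_index -> parameters awaiting results
        self.results = {}   # trial_index -> raw_data
        self._next_index = 0

    def get_next_trial(self):
        # The real AxClient chooses parameters via a generation strategy;
        # here we just pick randomly for illustration.
        parameters = {"variant": random.choice(self.options)}
        trial_index = self._next_index
        self._next_index += 1
        self.pending[trial_index] = parameters
        return parameters, trial_index

    def complete_trial(self, trial_index, raw_data):
        # Results arrive whenever the outcome is observed -- no evaluation
        # function is passed anywhere.
        self.results[trial_index] = raw_data
        del self.pending[trial_index]

client = MockClient(["option1", "option2"])
params, idx = client.get_next_trial()   # serve this parameterization to a user
# ... later, once the user's outcome is known ...
client.complete_trial(trial_index=idx, raw_data={"conversion": (1.0, 0.0)})
```

The point is simply that "get a trial" and "complete a trial" are decoupled calls, so the evaluation can happen outside your process entirely.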
@irvifa This should be doable without any additional work built into Ax, but we would definitely need to provide you with an example! I'll discuss with the team and try to prioritize getting you something in the next two weeks (so by end of January).
Hi all, I have a draft of the PlanOut Tutorial available here: planout_tutorial.ipynb.zip.
This will be checked in and released on the site with the next version of Ax (in a week or two).
Please let me know if this answers your questions :)
Have you taken a look at this tutorial? https://ax.dev/tutorials/building_blocks.html
This is the one most relevant to you, I think. The parts that you'll want to pay attention to are defining your custom Runner and Metric.
> What data needs to be persisted to the db so that Ax knows what happened to previous users, etc.
You can save data to the database however you want! Then, you'll want to write a custom Metric class that defines how to fetch that data, and converts it into a dataframe that Ax can parse. See an example here: https://ax.dev/tutorials/building_blocks.html#4.-Define-an-optimization-config-with-custom-metrics. In your case, your metric should load from your database and then convert the data to this format (one row for each arm).
Similarly, your custom Runner is what will actually launch the experiment into production. E.g. you could write some values to a database that Flask can read from in order to decide what arms it needs to show.
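The Runner/Metric division of labor described above can be sketched schematically. Everything here -- the class names, the `DB` dict, the method signatures -- is illustrative plain Python, not Ax's actual API (the real base classes to subclass are Ax's `Runner` and `Metric`, per the linked tutorial).

```python
# Schematic sketch only: a dict stands in for a real database shared with the
# web app; these classes are NOT Ax's Runner/Metric, just the same idea.

DB = {"deployed": {}, "outcomes": {}}  # pretend database

class FlaskRunner:
    """'Deploys' a trial by writing its arms where the web app can read them."""

    def run(self, trial_name, arms):
        # e.g. Flask reads DB["deployed"] to decide which arm each user sees
        DB["deployed"][trial_name] = list(arms)
        return {"deployed": True}

class ConversionMetric:
    """Fetches logged outcomes back and summarizes them, one row per arm."""

    def fetch(self, trial_name):
        # A real Ax Metric would also attach a standard error to each mean.
        return {
            arm: sum(vals) / len(vals)
            for arm, vals in DB["outcomes"].get(trial_name, {}).items()
        }

# Deploy, let the app log 0/1 conversions, then fetch per-arm means.
FlaskRunner().run("trial_0", ["option1", "option2"])
DB["outcomes"]["trial_0"] = {"option1": [1, 1, 0, 0], "option2": [0, 0, 1, 0]}
per_arm = ConversionMetric().fetch("trial_0")   # {"option1": 0.5, "option2": 0.25}
```

The Runner only writes "what to show"; the Metric only reads "what happened" -- your app and database sit between the two however you like.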
Let me know if this helps at all!
Hi @kkashin! Yes, I think providing examples of Ax working in combination with PlanOut would be very interesting to see, as in the current state I still find it a bit hard to understand where Ax would fit into the big picture (architecture-wise).
I'll have a look at the reference materials you've listed. Thank you!
@lena-kashtelyan, thank you! Yes, this is very helpful. Will give it a shot
Hi @sandys! Short answer: No, Ax and PlanOut would always work together, and Ax will not subsume PlanOut.
You can think of Ax as a tool which manages experiment metadata and designs new experiments. Ax does not actually implement an experimentation framework--something which handles, at a minimum, randomized user assignment. As a result, Ax works together with experimentation frameworks, via the "Runner" abstraction.
PlanOut is an experimentation framework, and it provides a way to implement the experiments designed by Ax. To interact with PlanOut, Ax would use a specific "PlanOut Runner", to create PlanOut scripts which handle the assignment suggested by Ax.
I am currently working on this runner, as well as a tutorial for using it.
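To make that division of labor concrete: the kind of deterministic, salted-hash user assignment that frameworks like PlanOut perform can be sketched in a few lines. This is illustrative only -- it is not PlanOut's actual implementation, just the same general idea.

```python
# Illustrative sketch of PlanOut-style deterministic assignment -- a simplified
# scheme, NOT PlanOut's actual code.
import hashlib

def assign(user_id, salt, arms):
    """Deterministically map a user to an arm: same user -> same arm, always."""
    digest = hashlib.sha1(f"{salt}.{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

# Ax's role (schematically): decide which arms to run in the experiment.
arms = ["option1", "option2", "option3"]

# The assignment layer's role: route each user, repeatably, with no stored state.
first = assign("user42", "my_experiment", arms)
again = assign("user42", "my_experiment", arms)
assert first == again  # deterministic: no per-user lookup table needed
```

Because assignment is a pure function of (user, salt), the experimentation framework never needs Ax in the serving path -- Ax only designs the experiment and consumes its results.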
Hi @sandys!
@sdsingh , can you maybe re-upload the planout tutorial? It's not clear to me how to find it.
@sandys maybe once you see that, it will at least unblock you?
To clarify, Ax is really agnostic to the backend you use. You can deploy your online trials however you want. The 'Runner' abstraction in Ax is what controls the deployment, and that's entirely customizable. Is there anything in particular you're confused about or that I could help clarify?
Great - the open source PlanOut integration will likely take a couple of months for us to put together, but in the meantime, we'll get better documentation up in the next 1-2 weeks that explains how Ax can be used for A/B testing (including how it fits in, architecture-wise). The paper also explains this, so let me know if you have any questions as you look at those materials.
> Great - the open source PlanOut integration will likely take a couple of months for us to put together, but in the meantime, we'll get better documentation up in the next 1-2 weeks that explains how Ax can be used for A/B testing (including how it fits in, architecture-wise). The paper also explains this, so let me know if you have any questions as you look at those materials.
@kkashin any update / ETA on the open source Ax-Planout integration? looking forward to it .. thanks!
Does this explain why all the examples in the ax.dev tutorials have an evaluation function?
I was under the impression that Ax could be used with its Service API to request hand allocations and report back to the service after a user converted.
We were planning to use Ax in our pricing experiments, but as far as I can understand, it looks like I should already have an evaluation function that just returns past outcomes for any given hand allocation.
Am I understanding this correctly?
@vmysla, may I ask what you mean when you say that all examples in our tutorials do have evaluation function? For example, through the Service API, you can request trials (which contain arms, aka points, which have parameterizations), then evaluate their parameterizations however you need to, then log the results back to Ax by 'completing' the trials. Does that functionality fit your purpose?
> it looks like I should already have an evaluation function that just returns past outcomes for any given hand allocation.
Not entirely sure what you meant there –– could you elaborate? A code snippet from one of our tutorials that shows what you are referring to by 'evaluation function that just returns past outcomes' would really help me understand and hopefully give an informative answer : )
@lena-kashtelyan, thanks for helping me figure this out.
I'm looking at this example of the Service API usage:
https://ax.dev/tutorials/gpei_hartmann_service.html#3.-Define-how-to-evaluate-trials
Step #3 has an evaluation function that contains a formula:

```python
import numpy as np

def evaluate(parameters):
    x = np.array([parameters.get(f"x{i+1}") for i in range(6)])
    # In our case, standard error is 0, since we are computing a synthetic function.
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (np.sqrt((x ** 2).sum()), 0.0)}
```
Another example:

```python
evaluation_function=lambda p: (p["x1"] + 2*p["x2"] - 7)**2 + (2*p["x1"] + p["x2"] - 5)**2,
```
Whereas I expected that, instead of having a predefined function, I would call the Service API and let it know the outcome of the given trial once I know whether the user converted or not (1/0).
Any exact timeline on this? @kkashin
@irvifa, @abronner - no updates on the timeline for Ax-PlanOut integration yet unfortunately. The team is focused on making improvements to existing functionality (including better benchmarking suite, improvements to ServiceAPI, and adding cost-aware algorithms). I'll try to have an update for you on timing within a couple of weeks.
Hi @lena-kashtelyan! We tried experimenting with the Service API approach for the online case but got some unexpected results. The experiment is very simple: a customer comes to our site and, using the get_next_trial() method, we pick one of 3 values for some parameter ('option1', 'option2', 'option3') and save it, together with the trial_id, per customer. Then with a 60% chance we convert customers who saw option1, 30% for option2, and 10% for option3. By convert I mean sending the complete_trial() request with the trial_id corresponding to the customer and a value of 1. We repeat this process for several iterations with 50 customers in each iteration.
We expected the distribution of hands to change over time (iteration to iteration) from 33% each to something that correlates with the conversion rates, but it stays the same (33%) the whole time. Should we somehow manually tell the AxClient to reevaluate the hands at some point?
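An aside on binary outcomes like these 1/0 conversions (independent of whatever is causing the behavior above): a common pattern is to aggregate each arm's raw conversions into a (mean, standard error) pair before reporting it, so the optimizer knows how noisy each observation is. A minimal stdlib sketch of that aggregation -- this is illustrative statistics, not an Ax API:

```python
# Pure-Python sketch (not Ax API): summarize raw 0/1 conversions into the
# (mean, standard error) pair that noisy observations are usually reported as.
import math

def summarize_conversions(outcomes):
    """Return (conversion rate, standard error of the mean) for 0/1 outcomes."""
    n = len(outcomes)
    rate = sum(outcomes) / n
    sem = math.sqrt(rate * (1 - rate) / n)  # Bernoulli SEM = sqrt(p*(1-p)/n)
    return rate, sem

# e.g. 50 customers, 30 of whom converted:
rate, sem = summarize_conversions([1] * 30 + [0] * 20)  # rate = 0.6
```

With more customers per iteration the SEM shrinks, which is exactly the signal that lets a model start favoring the better-converting arms.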
@yuriy-muzychuk, hi! This seems like a rather separate issue from the one discussed here (examples for online evaluation). Would you mind opening it separately and adding a code snippet showing what exactly you are doing? Thanks!
@kkashin It's been a while since then. I understand that; however, I'm still curious whether this can be done without any additional layer in Ax (i.e., you would only provide an example for this), or whether there should be an additional layer of abstraction in which we can define the input from whichever randomization tools we want.
hi guys,
we are extremely interested in the PlanOut vs Ax question here. Will Ax subsume much of the functionality in PlanOut?
We are really looking to adopt one or the other.
@ldworkin and @sdsingh Thank you for your answer. I believe it's already on your roadmap since the last time I talked to Lili about the improvement over email. Looking forward to the tutorial. 🙂
hi guys,
would it be possible to have an example of an online trial without using PlanOut? Maybe just on top of Flask, for simplicity.
I think a lot of people have similar questions, based on the activity in this issue over the last 18 months.
It definitely is possible! Can you tell us a little more about your setup? How are you planning to randomize users among different options?
As per @sdsingh's comment here:
> You can think of Ax as a tool which manages experiment metadata and designs new experiments. Ax does not actually implement an experimentation framework--something which handles, at a minimum, randomized user assignment. As a result, Ax works together with experimentation frameworks, via the "Runner" abstraction.
If you tell us more about how you're going to handle the experimentation framework part, I can help you understand how Ax would work together with it.
These docs might also be helpful:
https://ax.dev/docs/trial-evaluation.html#adding-your-own-runner
https://ax.dev/docs/data.html
With docs @ldworkin added above we'll consider this answered for now, but feel free to reopen!