microsoft / ailab
Experience, Learn and Code the latest breakthrough innovations with Microsoft AI
Home Page: https://www.ailab.microsoft.com/experiments/
License: MIT License
Hi guys, I am getting an error when I run /BackgroundMatting/run.ps1. It says the module imported on line 9 of bg_matting.py can't be found. Is anyone else getting this error?
(base) PS C:\virtualstage\BackgroundMatting> ./run
Traceback (most recent call last):
File ".\bg_matting.py", line 9, in
from proportional_threshold import proportional_split, proportional_merge
ModuleNotFoundError: No module named 'proportional_threshold'
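Not a fix for the missing file itself, but a quick way to confirm whether Python can even see the module from the directory you're running in. The helper name below is invented for this sketch:

```python
import importlib.util
import sys

def diagnose_missing_module(name):
    """Report whether a module is importable from the current sys.path."""
    spec = importlib.util.find_spec(name)
    if spec is None:
        return "'%s' not found on sys.path (%d entries searched)" % (name, len(sys.path))
    return "'%s' found at %s" % (name, spec.origin)

# Run this from the BackgroundMatting folder so sibling .py files are importable:
print(diagnose_missing_module("proportional_threshold"))
```

If the module is reported as missing even from the BackgroundMatting folder, the file genuinely isn't in the repository, rather than being a path problem.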
This issue occurred when trying to send a POST request to the Azure Function detection API.
I believe the root cause is that the Custom Vision Object Detection project type currently only allows the Limited Trial, so it is not possible to create the Training and Prediction services under the user's own Azure subscription and link them back to this project type.
Hello, where can I get my keys to fill this out:
<add key="webpages:Version" value="3.0.0.0" />
<add key="webpages:Enabled" value="false" />
<add key="ClientValidationEnabled" value="true" />
<add key="UnobtrusiveJavaScriptEnabled" value="true" />
<add key="ObjectDetectionTrainingKey" value="<your_training_key>" />
<add key="ObjectDetectionPredictionKey" value="<your_prediction_key>" />
<add key="ObjectDetectionProjectName" value="<Lida>" />
<add key="ObjectDetectionIterationName" value="<your_iteration>" />
<add key="HandwrittenTextSubscriptionKey" value="<your_key>" />
<add key="HandwrittenTextApiEndpoint" value="<your_endpoint>" />
<add key="AzureWebJobsStorage" value="<your_endpoint>" />
<add key="Sketch2CodeAppFunctionEndPoint" value="<your_endpoint>" />
<add key="Probability" value="30" />
<add key="storageUrl" value="<your_storage_url>" />
<add key="ComputerVisionDelay" value="120" />
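As far as I know, the training and prediction keys come from the Custom Vision portal (customvision.ai, under project settings), the handwritten-text key and endpoint from a Computer Vision resource in the Azure portal, and AzureWebJobsStorage from a Storage account connection string; treat these locations as a best guess rather than gospel. Once you've filled the file in, a small check like the one below (a sketch, not part of the project) can list keys still holding `<...>` placeholders:

```python
import re
import xml.etree.ElementTree as ET

def unfilled_keys(appsettings_xml):
    """Return appSettings keys whose value is still a <placeholder>."""
    root = ET.fromstring(appsettings_xml)
    placeholder = re.compile(r"^<.*>$")
    return [
        add.get("key")
        for add in root.iter("add")
        if placeholder.match(add.get("value", ""))
    ]

# Angle brackets must be escaped for the snippet to parse as XML:
sample = """
<appSettings>
  <add key="Probability" value="30" />
  <add key="ObjectDetectionTrainingKey" value="&lt;your_training_key&gt;" />
</appSettings>
"""
print(unfilled_keys(sample))  # ['ObjectDetectionTrainingKey']
```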
Is it possible with Kinect v1? The Kinect v1 SDK actually supports 4 Kinects per machine... that's pretty good, right? And affordable too! The Kinect SDK 1.8 supports green screening as well.
Can I modify this code for Kinect SDK 1.8? Will it work? And by the way, I am a beginner at coding 🙃
Hi, at https://teleporthq.io we're working on the same topic.
https://twitter.com/TeleportHQio/status/1016341169662459904
We're able to export sketches to any kind of code (React, React Native, Vue, etc.)
We've partnered with https://moqups.com to test our technology in real-life conditions. We'd be really interested in starting a conversation about this topic.
Best regards,
Paul
I get an error message in console when I try to upload an image. The message is:
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
I have installed SnipInsights-1.0.0.0, but I'm still confused about the environment in MonoDevelop. I need some help getting started with SnipInsights on Ubuntu.
Text-to-speech does not work on the introduction message. It only activates when we press the microphone, and hence it starts working from the prompt messages.
I want to build a bot that talks from the very first introduction message. How can I do this?
Missing: the Program.cs file used to train the Custom Vision Object Detection model.
https://aischool.microsoft.com/en-us/services/learning-paths/sketch2code/sketch2code-lab/train-an-object-detection-model
How do I run this project on my PC?
Hello,
Thank you for this awesome feature.
How can I change the generated code language from HTML to XAML?
Thanks.
Hi, was finally able to get the Virtual Stage to run but I'm getting mattes that are completely unusable. The setup is I'm standing in front of a green screen, wearing a bluejean shirt and pants. It has successfully eliminated the entire background and seems to be keeping skintones alright, but there are big holes in my shirt and pants. Any idea what's going on?
I tried to build Speaker.Recorder in VS2019, but I'm facing the issue that https://dotnet.myget.org/F/uwpcommunitytoolkit/api/v3/index.json doesn't exist.
I'd like to know how to cope with this issue.
Thanks.
I hope there is still someone answering questions for this project.
I was trying to build this project and got an error in the DecodedKinectCaptureFrame.cs file.
VirtualStage\VirtualStage\Speaker.Recorder\Speaker.Recorder\Kinect\DecodedKinectCaptureFrame.cs(63,63,63,69): error CS1061: 'Image' does not contain a definition for 'Handle' and no accessible extension method 'Handle' accepting a first argument of type 'Image' could be found (are you missing a using directive or an assembly reference?)
There is no definition for Handle on the Image type. Any ideas how to solve this?
Thanks
Using Postman version 7.2.0.
I imported the collection & environment files.
I can't find the "Create Data Source" request in the collection.
Browse -> Collections -> Azure Search: there is no Create Data Source.
Sorry, I don't know whether this is the right place to ask, but the online Sketch2Code service at https://sketch2code.azurewebsites.net/ seems to be down. I'm trying to upload an image, but the browser gives me a 500 error... and the page shows no notification about this error.
I am not managing to get the chatbot to recognize my speech input as German. I managed to change the text-to-speech to German, but when I talk into the mic the bot thinks I'm speaking English. I saw that I can possibly change it via
config.SpeechRecognitionLanguage = "de-de";
But honestly, I have no clue where to put it.
Am I right that I have to change something in the SpeechModule.js?
I solved it by changing
o = i.SpeechResultFormat.Simple, s = e.locale || "en-US"
to
o = i.SpeechResultFormat.Simple, s = "de-DE"
in wwwroot/lib/CognitiveServices.js
There are important files that all Microsoft projects should have that are not present in this repository. A pull request has been opened to add the missing file(s). When the PR is merged, this issue will be closed automatically.
Microsoft teams can learn more about this effort and share feedback within the open source guidance available internally.
In the "Deploy to Azure from Visual Studio" section for the "Adding Speech support (Text-to-Speech and Speech-to-Text)" section, the instruction says:
"Open the appsettings.json file.
Replace the values of MicrosoftAppId and MicrosoftAppPassword with the values you got from Azure."
However, the appsettings.json file doesn't have any properties like that.
Also, are we supposed to delete the properties still there for the settings that we deleted in step 5 (LuisAPIHostName, LuisAPIKey, LuisAppId)?
I added 2 new properties in appsettings.json for MicrosoftAppId and MicrosoftAppPassword, but it doesn't seem to work...
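To rule out a simple naming or empty-value slip, a quick check like this can confirm the two bot credentials are actually present in the JSON. The property names follow the usual Bot Framework convention; whether this particular template reads them is an assumption:

```python
import json

REQUIRED = ("MicrosoftAppId", "MicrosoftAppPassword")

def missing_settings(appsettings_text):
    """Return the required bot settings that are absent or empty."""
    settings = json.loads(appsettings_text)
    return [k for k in REQUIRED if not settings.get(k)]

sample = '{"MicrosoftAppId": "abc-123", "MicrosoftAppPassword": ""}'
print(missing_settings(sample))  # ['MicrosoftAppPassword']
```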
Hi There -- thanks for putting this up... Amazing concept.
I tried uploading the attached sketch (both hand-sketched & from MS Paint) but the results were a little different. Is there a guideline for preparing the sketch?
Also, is this project in a state where it can be used in a real-world program, or is it at the research-paper level?
Appreciate the response.
I think you should improve the mobile version; on PC almost everything is good, but on mobile there are many sections that break on small devices.
https://sketch2code.azurewebsites.net/
Service Unavailable
HTTP Error 503. The service is unavailable.
Hi
I'm getting an error when processing the pictures.
The error screen reads as in the attached file 1.png.
The photos being processed when this error occurs are in the attached file 2.png.
I saved this project under the route below:
C:\Users\edz\Desktop\ailab-master\VirtualStage\BackgroundMatting
Would you be able to help solve this issue?
Thanks.
bg_matting.py references proportional_threshold, but it doesn't seem to be part of the project. Based on the naming conventions in the project, it looks like a proportional_threshold.py file is missing? Or is this imported from somewhere else?
File ".\bg_matting.py", line 9, in
from proportional_threshold import proportional_split, proportional_merge
ModuleNotFoundError: No module named 'proportional_threshold'
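Until the real file turns up, a guessed stand-in can at least unblock the import. Everything below is an assumption from the function names alone (split a sequence into near-equal contiguous chunks, then merge them back); it may well not match what the real module does with frames, so treat it strictly as a placeholder:

```python
# proportional_threshold.py -- hypothetical stub, contract unknown.

def proportional_split(items, n):
    """Split items into n contiguous chunks of near-equal size."""
    size, extra = divmod(len(items), n)
    chunks, start = [], 0
    for i in range(n):
        end = start + size + (1 if i < extra else 0)
        chunks.append(items[start:end])
        start = end
    return chunks

def proportional_merge(chunks):
    """Inverse of proportional_split: concatenate the chunks in order."""
    return [item for chunk in chunks for item in chunk]
```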
In Step3.cshtml and Step5.cshtml there are several places that use hardcoded URLs to your blob storage. When someone tries to get the code up and running for himself, this will result in the app not showing the selected images or their predicted output.
Hi there,
is this project suitable for real-time video frame streaming? Say I want to have the matted output sent to Unity or another DCC package.
Unhandled Exception: Microsoft.Rest.HttpOperationException: Operation returned an invalid status code 'Unauthorized'
at Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.TrainingApi.GetDomainsWithHttpMessagesAsync(Dictionary`2 customHeaders, CancellationToken cancellationToken)
at Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.TrainingApiExtensions.GetDomainsAsync(ITrainingApi operations, CancellationToken cancellationToken)
at Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.TrainingApiExtensions.GetDomains(ITrainingApi operations)
at Import.Program.Main(String[] args) in ~/AISchoolTutorials-master/sketch2code/Import/Program.cs:line 29
I keep getting this error when I try to upload the JSON data. My key is correct, although in step 2 of the tutorial I couldn't complete steps 10, 11, and 12, as there is no Notepad in the quick settings tab.
Any help is greatly appreciated. Thanks!
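An Unauthorized response from the Training API usually means the key was copied from the wrong resource, or the endpoint region doesn't match where the project lives. A trivial format check can at least catch truncated or pasted-over keys; the 32-hex-character format is an assumption about Custom Vision keys worth double-checking against your portal:

```python
import re

def looks_like_customvision_key(key):
    """Heuristic: Custom Vision keys appear to be 32 hex characters."""
    return bool(re.fullmatch(r"[0-9a-fA-F]{32}", key))

print(looks_like_customvision_key("0123456789abcdef0123456789abcdef"))  # True
```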
Hi,
Would it be possible to update the README for Sketch2Code and explain how to train the sample model using the dataset provided? Is there a fast way of linking the tags in dataset.json to the images in our own Custom Vision project or do we have to manually tag them?
Thanks
On a new clone of your repo, I can't get the model to train. There's a type mismatch in the updates when building the optimizer.
Running with conda on macOS (using the CPU). I didn't modify any files; I just added the .txt file. I tried updating n_words and changing various things in the config file, but no luck.
Any help would be much appreciated. Thanks, Tom
Error message:
Building optimizers...
Traceback (most recent call last):
File "/anaconda3/envs/storytelling/lib/python3.5/site-packages/theano/compile/pfunc.py", line 193, in rebuild_collect_shared
allow_convert=False)
File "/anaconda3/envs/storytelling/lib/python3.5/site-packages/theano/tensor/type.py", line 234, in filter_variable
self=self))
TypeError: Cannot convert Type TensorType(float64, matrix) (of Variable Elemwise{add,no_inplace}.0) into Type TensorType(float32, matrix). You can try to manually convert Elemwise{add,no_inplace}.0 into a TensorType(float32, matrix).
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "training.py", line 6, in
EncTrainer.train()
File "/Users/tom/Documents/development/ailab/Pix2Story/source/training/train_encoder.py", line 40, in train
trainer(self.text, self.training_options)
File "/Users/tom/Documents/development/ailab/Pix2Story/source/skipthoughts_vectors/training/train.py", line 128, in trainer
f_grad_shared, f_update = eval(optimizer)(lr, tparams, grads, inps, cost)
File "/Users/tom/Documents/development/ailab/Pix2Story/source/skipthoughts_vectors/encdec_functs/optim.py", line 40, in adam
f_update = theano.function([lr], [], updates=updates, on_unused_input='ignore', profile=False)
File "/anaconda3/envs/storytelling/lib/python3.5/site-packages/theano/compile/function.py", line 317, in function
output_keys=output_keys)
File "/anaconda3/envs/storytelling/lib/python3.5/site-packages/theano/compile/pfunc.py", line 449, in pfunc
no_default_updates=no_default_updates)
File "/anaconda3/envs/storytelling/lib/python3.5/site-packages/theano/compile/pfunc.py", line 208, in rebuild_collect_shared
raise TypeError(err_msg, err_sug)
TypeError: ('An update must have the same type as the original shared variable (shared_var=<TensorType(float32, matrix)>, shared_var.type=TensorType(float32, matrix), update_val=Elemwise{add,no_inplace}.0, update_val.type=TensorType(float64, matrix)).', 'If the difference is related to the broadcast pattern, you can call the tensor.unbroadcast(var, axis_to_unbroadcast[, ...]) function to remove broadcastable dimensions.')
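A common cause of this Theano error is that some constant or initial value in the update expression is float64, which promotes the whole update even though the shared variable is float32. Setting `THEANO_FLAGS=floatX=float32` before importing Theano, and casting inputs explicitly, usually clears it; whether that is the culprit in this repo's optimizer is an assumption. The promotion itself can be seen with plain NumPy:

```python
import os
import numpy as np

# Set before theano is imported (floatX is a real Theano config flag):
os.environ.setdefault("THEANO_FLAGS", "floatX=float32")

def as_floatX(value, floatX="float32"):
    """Cast any numeric input to the configured float width."""
    return np.asarray(value, dtype=floatX)

# float32 weights plus a float64 term silently upcast the result:
weights = np.zeros((2, 2), dtype=np.float32)
lr64 = np.full_like(weights, 0.001, dtype=np.float64)
promoted = weights + lr64             # dtype becomes float64
fixed = weights + as_floatX(lr64)     # stays float32
print(promoted.dtype, fixed.dtype)    # float64 float32
```

Wrapping every learning-rate and momentum constant in a cast like `as_floatX` inside the optimizer is the usual fix for these mismatched-update errors.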
Is there any workaround for the web scraper script failing to download files in Pix2Story?
Does anyone know about this?
So I have a bot that is solely buttons. You start off with, for example, choice buttons A, B, C, and D. You click button A and it prompts you with more options: A1, A2, A3, A4, and so on. I want to add "start over" and "go back" buttons to these, but I'm not sure how. Any ideas?
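One SDK-agnostic way to support "go back" and "start over" is to keep a stack of visited menus in the conversation state: choosing pushes, "go back" pops, "start over" resets. The class and menu names below are invented for illustration, not taken from any bot framework:

```python
class MenuNavigator:
    """Stack-based navigation for a button-driven menu tree."""

    def __init__(self, root):
        self.root = root
        self.stack = [root]

    @property
    def current(self):
        return self.stack[-1]

    def choose(self, option):
        """Descend into the submenu the user picked."""
        self.stack.append(option)
        return self.current

    def go_back(self):
        """Pop one level; stay at the root if already there."""
        if len(self.stack) > 1:
            self.stack.pop()
        return self.current

    def start_over(self):
        """Reset to the root menu."""
        self.stack = [self.root]
        return self.current

nav = MenuNavigator("main")
nav.choose("A")
nav.choose("A2")
print(nav.go_back())     # A
print(nav.start_over())  # main
```

In a real bot you would render `nav.current`'s options as buttons on each turn, and append "go back" / "start over" buttons that call the two methods instead of descending.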
Quick question : who is responsible for training the model to detect/recognize patterns? Where is the "hey .. this is not a textbox but a dropdown" section?
Is that simply performed by Azure Cognitive Services or is it done anywhere in the code?
Many of the project titles in the README link directly to incorrect URLs.
Link some folders in the documentation. PR #17 fixes this
It fails when reconstructing the video. It is looking for a timestampfile.txt file which is not there. What am I missing? I assume this file should be created/generated somewhere? I am running with the --no_kinect_mask option.
Traceback (most recent call last):
File ".\bg_matting.py", line 104, in
reconstruct_all_video(original_videos, args.output_dir, output_suffix, outputs)
File "C:\ailab-master\VirtualStage\BackgroundMatting\reconstruct.py", line 7, in reconstruct_all_video
reconstruct_video(video, output_dir, suffix, outputs_list)
File "C:\ailab-master\VirtualStage\BackgroundMatting\reconstruct.py", line 22, in reconstruct_video
video, out_path + suffix, os.path.basename(video) + suffix, o,
File "C:\ailab-master\VirtualStage\BackgroundMatting\reconstruct.py", line 60, in write_output_timestamp_file
ts_in = open(os.path.join(input, "timestampfile.txt"), "rt")
FileNotFoundError: [Errno 2] No such file or directory: 'C:\test\testvideo\timestampfile.txt'
I can't find the download button.
Actually, I'm not used to GitHub.
The Microsoft links at the bottom of the pages (Microsoft logo/name) in the JFK online demo are broken. They point to https://jfk-demo.azurewebsites.net/www.microsoft.com instead of https://www.microsoft.com.
👋 Hello, @tarasha, @macastejon, @gsegares - a potential medium severity OS Command Injection (CWE-78) vulnerability in your repository has been disclosed to us.
1️⃣ Visit https://huntr.dev/bounties/1-other-microsoft/ailab for more advisory information.
2️⃣ Sign-up to validate or speak to the researcher for more assistance.
3️⃣ Propose a patch or outsource it to our community - whoever fixes it gets paid.
✏️ NOTE: If we don't hear from you in 14 days, we will proactively source a fix for this vulnerability (and open a PR) to ensure community safety.
Join us on our Discord and a member of our team will be happy to help! 🤗
Speak to a member of our team: @JamieSlome
This issue was automatically generated by huntr.dev - a bug bounty board for securing open source code.
The main README of the project needs to be more descriptive, as it only has the 'Contributing' heading.
I am curious to use it, but the site is not working. Please make sure it is up and running.
Thank you for your effort; I love this research. It's really amazing! Could you please share the handwritten design dataset and annotations?
It's really important but difficult for an individual to collect. I would appreciate it if you could make it public.
When the photo is rotated 90 degrees, it cannot be recognized, and it then generates two boxes with a line.