So, I updated this project (the complete solution) to .NET 6.0. The WebApi project works, but the console application fails with this error:
/BERT-ML.NET-master/BERT.Console/bin/Debug/net6.0/BERT.Console "Bob is walking through the woods collecting blueberries and strawberries to make a pie." "What is his name?"
Unhandled exception. System.InvalidOperationException: Error initializing model :Microsoft.ML.OnnxRuntime.OnnxRuntimeException: [ErrorCode:InvalidProtobuf] Load model from Model/bertsquad-10.onnx failed:Protobuf parsing failed.
at Microsoft.ML.OnnxRuntime.NativeApiStatus.VerifySuccess(IntPtr nativeStatus)
at Microsoft.ML.OnnxRuntime.InferenceSession.Init(String modelPath, SessionOptions options, PrePackedWeightsContainer prepackedWeightsContainer)
at Microsoft.ML.OnnxRuntime.InferenceSession..ctor(String modelPath, SessionOptions options)
at Microsoft.ML.Transforms.Onnx.OnnxModel..ctor(String modelFile, Nullable`1 gpuDeviceId, Boolean fallbackToCpu, Boolean ownModelFile, IDictionary`2 shapeDictionary, Int32 recursionLimit, Nullable`1 interOpNumThreads, Nullable`1 intraOpNumThreads)
at Microsoft.ML.Transforms.Onnx.OnnxTransformer..ctor(IHostEnvironment env, Options options, Byte[] modelBytes)
---> Microsoft.ML.OnnxRuntime.OnnxRuntimeException: [ErrorCode:InvalidProtobuf] Load model from Model/bertsquad-10.onnx failed:Protobuf parsing failed.
at Microsoft.ML.OnnxRuntime.NativeApiStatus.VerifySuccess(IntPtr nativeStatus)
at Microsoft.ML.OnnxRuntime.InferenceSession.Init(String modelPath, SessionOptions options, PrePackedWeightsContainer prepackedWeightsContainer)
at Microsoft.ML.OnnxRuntime.InferenceSession..ctor(String modelPath, SessionOptions options)
at Microsoft.ML.Transforms.Onnx.OnnxModel..ctor(String modelFile, Nullable`1 gpuDeviceId, Boolean fallbackToCpu, Boolean ownModelFile, IDictionary`2 shapeDictionary, Int32 recursionLimit, Nullable`1 interOpNumThreads, Nullable`1 intraOpNumThreads)
at Microsoft.ML.Transforms.Onnx.OnnxTransformer..ctor(IHostEnvironment env, Options options, Byte[] modelBytes)
--- End of inner exception stack trace ---
at Microsoft.ML.Transforms.Onnx.OnnxTransformer..ctor(IHostEnvironment env, Options options, Byte[] modelBytes)
at Microsoft.ML.Transforms.Onnx.OnnxTransformer..ctor(IHostEnvironment env, String[] outputColumnNames, String[] inputColumnNames, String modelFile, Nullable`1 gpuDeviceId, Boolean fallbackToCpu, IDictionary`2 shapeDictionary, Int32 recursionLimit, Nullable`1 interOpNumThreads, Nullable`1 intraOpNumThreads)
at Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator..ctor(IHostEnvironment env, String[] outputColumnNames, String[] inputColumnNames, String modelFile, Nullable`1 gpuDeviceId, Boolean fallbackToCpu, IDictionary`2 shapeDictionary, Int32 recursionLimit, Nullable`1 interOpNumThreads, Nullable`1 intraOpNumThreads)
at Microsoft.ML.OnnxCatalog.ApplyOnnxModel(TransformsCatalog catalog, String[] outputColumnNames, String[] inputColumnNames, String modelFile, Nullable`1 gpuDeviceId, Boolean fallbackToCpu)
at Microsoft.ML.Models.BERT.Onnx.OnnxModelConfigurator`1.SetupMlNetModel(IOnnxModel onnxModel) in /home/michieal/Desktop/Projects/projects/RiderProjects/BERT-ML.NET-master/Microsoft.ML.Models.BERT/Onnx/OnnxModelConfigurator.cs:line 25
at Microsoft.ML.Models.BERT.Onnx.OnnxModelConfigurator`1..ctor(IOnnxModel onnxModel) in /home/michieal/Desktop/Projects/projects/RiderProjects/BERT-ML.NET-master/Microsoft.ML.Models.BERT/Onnx/OnnxModelConfigurator.cs:line 15
at Microsoft.ML.Models.BERT.BertModel.Initialize() in /home/michieal/Desktop/Projects/projects/RiderProjects/BERT-ML.NET-master/Microsoft.ML.Models.BERT/BertModel.cs:line 30
at BERT.Console.Program.Main(String[] args) in /home/michieal/Desktop/Projects/projects/RiderProjects/BERT-ML.NET-master/BERT.Console/Program.cs:line 18
Process finished with exit code 134.
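For what it's worth, an [ErrorCode:InvalidProtobuf] failure at load time usually means the bytes on disk are not a real ONNX model at all, e.g. a Git LFS pointer file or an HTML error page left behind by a failed download. A minimal sanity check, demonstrated on a stand-in file (the path, the pointer content, and the size threshold are illustrative assumptions, not taken from the repo):

```shell
# Simulate the bad case: a fake Git LFS pointer standing in for the
# 137 KB "model" a broken download can leave behind.
MODEL=/tmp/bertsquad-10.onnx
printf 'version https://git-lfs.github.com/spec/v1\noid sha256:abc\nsize 435882893\n' > "$MODEL"

# 1. Size check: the real bertsquad-10.onnx is on the order of 400 MB.
SIZE=$(wc -c < "$MODEL")
if [ "$SIZE" -lt 1000000 ]; then
  echo "suspiciously small: $SIZE bytes"
fi

# 2. Content check: an LFS pointer or HTML error page is plain text,
#    while a real ONNX model is binary protobuf.
if head -c 40 "$MODEL" | grep -q 'git-lfs'; then
  echo "this is a Git LFS pointer, not the model"
fi
```

In the real solution, point MODEL at Model/bertsquad-10.onnx next to the console binary; if either check fires, the file needs to be re-fetched.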
I'd like to know how to fix this, since it should either work for both projects in the solution or fail for both. Mind you, I had to download a fresh copy of the bertsquad-10 ONNX model, because the one the shell script grabbed was corrupt and only 137 KB instead of ~400 MB.
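One possible explanation for the asymmetry (a sketch, assuming each project copies the model into its own build output): re-downloading the model in one place does not replace the copy under the console app's bin/ directory, so the WebApi can be loading the good file while the console app still loads the truncated one. Simulated here with a throwaway tree; in the real solution the find would run from the solution root, and the project names are illustrative:

```shell
# Build a fake solution tree with two per-project copies of the model,
# one truncated (137 KB) and one full-size stand-in.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/BERT.Console/bin/Debug/net6.0/Model" "$ROOT/BERT.WebApi/bin/Debug/net6.0/Model"
head -c 137000 /dev/zero > "$ROOT/BERT.Console/bin/Debug/net6.0/Model/bertsquad-10.onnx"
head -c 400000 /dev/zero > "$ROOT/BERT.WebApi/bin/Debug/net6.0/Model/bertsquad-10.onnx"

# List every copy with its size; a copy far smaller than the others is
# the corrupt one and needs replacing (or a clean rebuild to re-copy it).
find "$ROOT" -name 'bertsquad-10.onnx' -exec ls -l {} \;
```

If the sizes differ like this, replacing the small copy (or deleting bin/ and rebuilding so the good model is re-copied) should make both projects behave the same.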
My use case: learning about BERT models, and having one do Q&A as a front end for a generative text AI model.