kimera-vio-evaluation's People

Contributors

jingnanshi, marcusabate, nathanhhughes, tonirv, violetteavi, yunzc

kimera-vio-evaluation's Issues

Plotting notebooks need a library

The plotting notebooks duplicate code because many helper functions are copied between them. It would be useful to collect these shared functions in a small Python library that lives in the same folder as the notebooks; that would make the notebooks much cleaner.

The downside is that this would be a bit less transparent for users who want to quickly open a notebook and run tests without realizing they might need to alter some of the library functions.
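As a sketch of the proposal (module path and function name are hypothetical, chosen only for illustration), the duplicated helpers could live in a module next to the notebooks:

```python
# notebooks/notebook_utils.py -- hypothetical shared module living next to
# the plotting notebooks; collects helpers that are currently copy-pasted.

def downsample(values, keep_every=10):
    """Example shared helper: keep every n-th sample so dense logs plot fast."""
    return values[::keep_every]
```

Each notebook in that folder could then simply `from notebook_utils import downsample` instead of carrying its own copy.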

PIM plotting currently in wrong branch

Description:
PIM plotting needs its own branch. Right now it is in the feature/lcd_support branch, which will hopefully be merged soon; regardless, that merge will take longer than a simple update to add PIM plotting.

regression_tests.py import error

Description:
regression_tests.py cannot import the function run_dataset.

Command:

./evaluation/regression_tests.py -r -a --plot --save_plots --save_boxplots --save_results ../regress_ev.yaml

Console output:

Traceback (most recent call last):
  File "./evaluation/regression_tests.py", line 13, in <module>
    from evaluation.evaluation_lib import run_dataset
ImportError: cannot import name run_dataset

Additional files:
Please attach all the files needed to reproduce the error.

Please give also the following information:

  • evo version: https://github.com/ToniRV/evo-1
  • Python version used: Python 2.7
  • operating system and version (e.g. Ubuntu 16.04 or Windows 10): Ubuntu 18.04
  • did you change the source code? (yes / no): no

executable_path: '../Kimera-VIO/buil/stereoVIOEuroc'
vocabulary_path: '../Kimera-VIO/vocabulary/ORBvoc.yml'
results_dir: 'Kimera-VIO-Evaluation/results'
params_dir: '../Kimera-VIO/params/Euroc'
dataset_dir: '../Kimera-VIO/script/euroc'
regression_tests_dir: 'Kimera-VIO-Evaluation/regression_tests'

datasets_to_run:
 - name: V1_01_easy
   use_lcd: false
   plot_vio_and_pgo: true
   segments: [1]
   pipelines: ['S']
   discard_n_start_poses: 10
   discard_n_end_poses: 10
   initial_frame: 10
   final_frame: 22000
   parallel_run: true


IndexError segfault

Getting this IndexError:

[screenshot of the IndexError traceback attached in the original issue]

Adding a silly -1 at the end works around it, but this should not happen; not sure why.

ids = range(int(discard_n_start_poses), int(num_of_poses - discard_n_end_poses - 1), 1)
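One plausible reading of the off-by-one (a reconstruction, not confirmed from the repo's code: it assumes each id i is later used to index the pose pair (i, i + 1)):

```python
# Toy values standing in for the real dataset parameters.
num_of_poses = 5
discard_n_start_poses = 1
discard_n_end_poses = 1
poses = list(range(num_of_poses))

# If each id i is later used to access the pair (poses[i], poses[i + 1]),
# range()'s exclusive stop must be reduced by one more -- hence the "silly -1":
ids = range(int(discard_n_start_poses),
            int(num_of_poses - discard_n_end_poses - 1), 1)
pairs = [(poses[i], poses[i + 1]) for i in ids]  # never indexes past the end
```

Without the extra -1, the last id would be num_of_poses - discard_n_end_poses - 1, and i + 1 would then reach into the discarded (or nonexistent) tail.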

main_evaluation.py run error

Description:
main_evaluation.py fails with a TemplateNotFound error for vio_performance_template.html.

Command:

./evaluation/main_evaluation.py -r -a --plot --verbose_sparkvio --save_plots --save_boxplots --save_results ../main_ev.yaml

Console output:


Traceback (most recent call last):
  File "./evaluation/main_evaluation.py", line 72, in <module>
    if run(args):
  File "./evaluation/main_evaluation.py", line 21, in run
    dataset_evaluator = DatasetEvaluator(experiment_params, args, extra_flagfile_path)
  File "/usr/local/lib/python2.7/dist-packages/evaluation/evaluation_lib.py", line 291, in __init__
    self.website_builder = evt.WebsiteBuilder(self.results_dir)
  File "/usr/local/lib/python2.7/dist-packages/evaluation/tools/website_utils.py", line 27, in __init__
    self.boxplot_template = self.env.get_template('vio_performance_template.html')
  File "/usr/local/lib/python2.7/dist-packages/jinja2/environment.py", line 883, in get_template
    return self._load_template(name, self.make_globals(globals))
  File "/usr/local/lib/python2.7/dist-packages/jinja2/environment.py", line 857, in _load_template
    template = self.loader.load(self, name, globals)
  File "/usr/local/lib/python2.7/dist-packages/jinja2/loaders.py", line 115, in load
    source, filename, uptodate = self.get_source(environment, name)
  File "/usr/local/lib/python2.7/dist-packages/jinja2/loaders.py", line 249, in get_source
    raise TemplateNotFound(template)
jinja2.exceptions.TemplateNotFound: vio_performance_template.html


Additional files:
Please attach all the files needed to reproduce the error.

Please give also the following information:


executable_path: '../Kimera-VIO/buil/stereoVIOEuroc'
vocabulary_path: '../Kimera-VIO/vocabulary/ORBvoc.yml'
results_dir: 'Kimera-VIO-Evaluation/reuslts'
params_dir: '../Kimera-VIO/params/Euroc'
dataset_dir: '../Kimera-VIO/script/euroc'

datasets_to_run:
 - name: V1_01_easy
   use_lcd: false
   plot_vio_and_pgo: true
   segments: [1]
   pipelines: ['S']
   discard_n_start_poses: 10
   discard_n_end_poses: 10
   initial_frame: 10
   final_frame: 22000
   parallel_run: true
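TemplateNotFound usually means jinja2's loader is not pointed at the directory that actually contains vio_performance_template.html (e.g. the website folder was not installed alongside the package). A generic way to verify a loader setup, demonstrated with a temporary directory standing in for the real template folder (the real path in Kimera-VIO-Evaluation may differ):

```python
import os
import tempfile
from jinja2 import Environment, FileSystemLoader

# Stand-in for the real template directory; in Kimera-VIO-Evaluation this
# would be wherever vio_performance_template.html was actually installed.
template_dir = tempfile.mkdtemp()
with open(os.path.join(template_dir, "vio_performance_template.html"), "w") as f:
    f.write("<h1>{{ title }}</h1>")

# If get_template() succeeds, the loader is pointed at the right place.
env = Environment(loader=FileSystemLoader(template_dir))
html = env.get_template("vio_performance_template.html").render(title="VIO")
```

If the same pattern raises TemplateNotFound against the real install, the template file simply is not in the directory the loader searches.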


Frontend Jupyter error

Description: In the frontend Jupyter notebook, everything works fine except Mono RANSAC. I get the error below; has anyone else encountered this problem?

The paths I defined in the notebook are:
Command:

vio_output_dir = "/home/george/Kimera-VIO-Evaluation/results/V1_01_easy/Euroc"
gt_data_file = "/home/george/datasets/EuRoC/ASL_Dataset_format/V1_01_easy/mav0/state_groundtruth_estimate0/data.csv"
left_cam_calibration_file = "/home/george/datasets/EuRoC/ASL_Dataset_format/V1_01_easy/mav0/cam0/sensor.yaml"

Console output:


---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input> in <module>()
      1 # Generate some trajectories for later plots
      2 # Convert to evo trajectory objects
----> 3 traj_ref_unassociated = pandas_bridge.df_to_trajectory(gt_df)
      4 
      5 # Use the mono ransac file as estimated trajectory.

/home/george/venv/local/lib/python2.7/site-packages/evo/tools/pandas_bridge.pyc in df_to_trajectory(df)
     50     if not isinstance(df, pd.DataFrame):
     51         raise TypeError("pandas.DataFrame or derived required")
---> 52     positions_xyz = df.loc[:,['x','y','z']].to_numpy()
     53     quaternions_wxyz = df.loc[:,['qw','qx','qy','qz']].to_numpy()
     54     # NOTE: df must have timestamps as index

/home/george/venv/local/lib/python2.7/site-packages/pandas/core/indexing.pyc in __getitem__(self, key)
   1492             except (KeyError, IndexError, AttributeError):
   1493                 pass
-> 1494             return self._getitem_tuple(key)
   1495         else:
   1496             # we by definition only have the 0th axis

/home/george/venv/local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_tuple(self, tup)
    886                 continue
    887 
--> 888             retval = getattr(retval, self.name)._getitem_axis(key, axis=i)
    889 
    890         return retval

/home/george/venv/local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_axis(self, key, axis)
   1900                     raise ValueError('Cannot index with multidimensional key')
   1901 
-> 1902                 return self._getitem_iterable(key, axis=axis)
   1903 
   1904             # nested tuple slicing

/home/george/venv/local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_iterable(self, key, axis)
   1203             # A collection of keys
   1204             keyarr, indexer = self._get_listlike_indexer(key, axis,
-> 1205                                                          raise_missing=False)
   1206             return self.obj._reindex_with_indexers({axis: [keyarr, indexer]},
   1207                                                    copy=True, allow_dups=True)

/home/george/venv/local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _get_listlike_indexer(self, key, axis, raise_missing)
   1159         self._validate_read_indexer(keyarr, indexer,
   1160                                     o._get_axis_number(axis),
-> 1161                                     raise_missing=raise_missing)
   1162         return keyarr, indexer
   1163 

/home/george/venv/local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _validate_read_indexer(self, key, indexer, axis, raise_missing)
   1244                 raise KeyError(
   1245                     u"None of [{key}] are in the [{axis}]".format(
-> 1246                         key=key, axis=self.obj._get_axis_name(axis)))
   1247 
   1248             # We (temporarily) allow for some missing keys with .loc, except in

KeyError: u"None of [Index([u'x', u'y', u'z'], dtype='object')] are in the [columns]"

Additional files:
Please attach all the files needed to reproduce the error.

Please give also the following information:

  • evo version number shown by git rev-parse HEAD:
  • Python version used: 2.7 (in a venv)
  • operating system and version (e.g. Ubuntu 16.04 or Windows 10): Ubuntu 18
  • did you change the source code? (yes / no): no

sudo apt-get install virtualenv
virtualenv -p python2.7 ./venv
source ./venv/bin/activate

git clone https://github.com/ToniRV/Kimera-VIO-Evaluation
cd Kimera-VIO-Evaluation
pip install .
python setup.py develop

git clone https://github.com/ToniRV/evo-1.git
cd evo-1
pip install . --upgrade --no-binary evo
cd ..
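A likely cause, worth verifying against your own data.csv header: the EuRoC ground-truth CSV does not name its columns x/y/z/qw/qx/qy/qz, so the `df.loc[:, ['x','y','z']]` inside df_to_trajectory finds nothing. Renaming the columns first avoids the KeyError. A self-contained sketch, using a hypothetical excerpt of the ASL-format header (check the names against your file before relying on them):

```python
import io
import pandas as pd

# Hypothetical excerpt of a EuRoC ground-truth data.csv; note the columns
# are NOT named x/y/z/qw/qx/qy/qz, which is exactly what the KeyError says.
csv_text = (
    "#timestamp, p_RS_R_x [m], p_RS_R_y [m], p_RS_R_z [m],"
    " q_RS_w [], q_RS_x [], q_RS_y [], q_RS_z []\n"
    "1403715273262142976, 0.1, 0.2, 0.3, 1.0, 0.0, 0.0, 0.0\n"
)
gt_df = pd.read_csv(io.StringIO(csv_text), sep=",", index_col=0)
gt_df.columns = gt_df.columns.str.strip()  # headers carry leading spaces
gt_df = gt_df.rename(columns={
    "p_RS_R_x [m]": "x", "p_RS_R_y [m]": "y", "p_RS_R_z [m]": "z",
    "q_RS_w []": "qw", "q_RS_x []": "qx", "q_RS_y []": "qy", "q_RS_z []": "qz",
})
# Now gt_df.loc[:, ['x', 'y', 'z']] resolves, so
# pandas_bridge.df_to_trajectory(gt_df) should no longer raise the KeyError.
```

In the notebook, `pd.read_csv(gt_data_file, ...)` would take the place of the inline StringIO example.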

pandas_bridge.df_to_trajectory error in backend jupyter notebook

Description:
In the backend notebook, the execution of

vio_output_dir = "/home/george/Kimera-VIO-Evaluation/results/V1_01_easy/Euroc/"
gt_data_file = "/home/george/datasets/EuRoC/ASL_Dataset_format/V1_01_easy/mav0/state_groundtruth_estimate0/data.csv"
gt_df = pd.read_csv(gt_data_file, sep=',', index_col=0)
gt_df = gt_df[~gt_df.index.duplicated()]
traj_ref_complete = pandas_bridge.df_to_trajectory(gt_df)

returns an error. Why does this happen?

Console output:


---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-43-cdf3f5517f57> in <module>()
      1 # Convert the gt relative-pose DataFrame to a trajectory object.
----> 2 traj_ref_complete = pandas_bridge.df_to_trajectory(gt_df)
      3 
      4 # Use the backend poses as trajectory.
      5 traj_est_unaligned = pandas_bridge.df_to_trajectory(output_poses_df)

/home/george/venv/local/lib/python2.7/site-packages/evo/tools/pandas_bridge.pyc in df_to_trajectory(df)
     50     if not isinstance(df, pd.DataFrame):
     51         raise TypeError("pandas.DataFrame or derived required")
---> 52     positions_xyz = df.loc[:,['x','y','z']].to_numpy()
     53     quaternions_wxyz = df.loc[:,['qw','qx','qy','qz']].to_numpy()
     54     # NOTE: df must have timestamps as index

/home/george/venv/local/lib/python2.7/site-packages/pandas/core/indexing.pyc in __getitem__(self, key)
   1492             except (KeyError, IndexError, AttributeError):
   1493                 pass
-> 1494             return self._getitem_tuple(key)
   1495         else:
   1496             # we by definition only have the 0th axis

/home/george/venv/local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_tuple(self, tup)
    886                 continue
    887 
--> 888             retval = getattr(retval, self.name)._getitem_axis(key, axis=i)
    889 
    890         return retval

/home/george/venv/local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_axis(self, key, axis)
   1900                     raise ValueError('Cannot index with multidimensional key')
   1901 
-> 1902                 return self._getitem_iterable(key, axis=axis)
   1903 
   1904             # nested tuple slicing

/home/george/venv/local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _getitem_iterable(self, key, axis)
   1203             # A collection of keys
   1204             keyarr, indexer = self._get_listlike_indexer(key, axis,
-> 1205                                                          raise_missing=False)
   1206             return self.obj._reindex_with_indexers({axis: [keyarr, indexer]},
   1207                                                    copy=True, allow_dups=True)

/home/george/venv/local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _get_listlike_indexer(self, key, axis, raise_missing)
   1159         self._validate_read_indexer(keyarr, indexer,
   1160                                     o._get_axis_number(axis),
-> 1161                                     raise_missing=raise_missing)
   1162         return keyarr, indexer
   1163 

/home/george/venv/local/lib/python2.7/site-packages/pandas/core/indexing.pyc in _validate_read_indexer(self, key, indexer, axis, raise_missing)
   1244                 raise KeyError(
   1245                     u"None of [{key}] are in the [{axis}]".format(
-> 1246                         key=key, axis=self.obj._get_axis_name(axis)))
   1247 
   1248             # We (temporarily) allow for some missing keys with .loc, except in

KeyError: u"None of [Index([u'x', u'y', u'z'], dtype='object')] are in the [columns]"


Additional files:
Please attach all the files needed to reproduce the error.

Please give also the following information:

  • evo version number shown by git rev-parse HEAD: evo-1 (the forked repo)
  • Python version used: 2.7
  • operating system and version (e.g. Ubuntu 16.04 or Windows 10): Ubuntu 18
  • did you change the source code? (yes / no): no


CI Builds Always Fail

Description:
CI builds on Travis always fail because the CI's installation of evo-1 has not been updated to the latest master. The CI should always clone evo-1 and build it from source.

ORBvoc.yml OpenCV 3.4.2 error

Description:
I have a problem with OpenCV in the evaluation code, which blocks my whole Kimera experimentation procedure. I followed the steps below:

STEP 1
sudo apt-get install virtualenv
virtualenv -p python2.7 ./venv
source ./venv/bin/activate

STEP 2
git clone https://github.com/ToniRV/Kimera-VIO-Evaluation
cd Kimera-VIO-Evaluation
pip install .
python setup.py develop

STEP 3
Move the contents of the website folder into
/home/george/venv/lib/python2.7/site-packages/website

STEP 4
Create my_example_euroc.yaml as follows:

executable_path: '/home/george/catkin_ws/build/kimera_vio/stereoVIOEuroc'
vocabulary_path: '/home/george/catkin_ws/src/Kimera-VIO/vocabulary/ORBvoc.yml'
params_dir: '/home/george/catkin_ws/src/Kimera-VIO/params'
dataset_dir: '/home/george/datasets/EuRoC/ASL_Dataset_format'
results_dir: '/home/george/Kimera-VIO-Evaluation/results'

datasets_to_run:
 - name: V1_01_easy
   use_lcd: true
   plot_vio_and_pgo: true
   segments: [1]
   pipelines: ['Euroc']
   discard_n_start_poses: 0
   discard_n_end_poses: 0
   initial_frame: 10
   final_frame: 220

STEP 5
Yamelize the dataset (prepend the %YAML:1.0 header to its yaml files).

Command:

cd Kimera-VIO-Evaluation
./evaluation/main_evaluation.py -r -a --save_plots --save_results --save_boxplots experiments/my_example_euroc.yaml


Console output:


(venv) george@dell:~/Kimera-VIO-Evaluation$ ./evaluation/main_evaluation.py -r -a --save_plots --save_results --save_boxplots experiments/my_example_euroc.yaml
  0%|          | 0/1 [00:00<?, ?it/s]
I0227 19:55:57.776446 8146 evaluation_lib.py:309] Run dataset: V1_01_easy
  Run pipeline: Euroc
//////////terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(3.4.2) /home/george/catkin_ws/build/opencv3_catkin/opencv3_src/modules/core/src/persistence_yml.cpp:51: error: (-212:Parsing error) icvYMLSkipSpaces in function '/home/george/catkin_ws/src/Kimera-VIO/vocabulary/ORBvoc.yml(1335677): Too long string or a last string w/o newline'
||||||||||Aborted (core dumped)
E0227 19:55:58.884696 8146 evaluation_lib.py:168] Failed pipeline run!
  Finished evaluation for dataset: V1_01_easy
I0227 19:55:58.885215 8146 evaluation_lib.py:312]  Dataset: V1_01_easy failed!!
  0%|          | 0/1 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "./evaluation/main_evaluation.py", line 76, in <module>
    if run(args):
  File "./evaluation/main_evaluation.py", line 26, in run
    dataset_evaluator.evaluate()
  File "/home/george/venv/local/lib/python2.7/site-packages/evaluation/evaluation_lib.py", line 313, in evaluate
    raise Exception("Failed to run dataset %s." % dataset['name'])
Exception: Failed to run dataset V1_01_easy.
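The parser error "a last string w/o newline" suggests the vocabulary file may be missing a trailing newline. This is a guess based on the error text, not a confirmed fix; a hedged workaround, demonstrated on a stand-in file (point the variable at your actual ORBvoc.yml instead):

```shell
# Demo on a stand-in file; replace with e.g.
#   f=/home/george/catkin_ws/src/Kimera-VIO/vocabulary/ORBvoc.yml
f=$(mktemp)
printf 'vocabulary: demo' > "$f"      # deliberately no trailing newline
if [ -n "$(tail -c 1 "$f")" ]; then   # last byte is not a newline
    printf '\n' >> "$f"               # append one
fi
```

If the file already ends with a newline, the check leaves it untouched. The %YAML:1.0 header mentioned in STEP 5 is still required separately.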


Using pandas_bridge to load CSVs

The evo.tools.file_interface methods are clunky and unreliable. With evo.tools.pandas_bridge from the relevant PR, we can read a CSV into a pandas DataFrame and then convert the DataFrame to a trajectory. Despite the extra computation, this is potentially more future-proof and doesn't require evo's file_interface, which demands very specific input CSV formats.

The order would be:

import pandas as pd
from evo.tools import pandas_bridge

df = pd.read_csv("/path/to/file.csv", sep=",", index_col=0)
traj = pandas_bridge.df_to_trajectory(df)

Very simple, only requires an extra step.

Yaml not dumped correctly: nested collections messing up.

Description:

From the PyYAML documentation ("Dictionaries without nested collections are not dumped correctly"):

Why does

import yaml
document = """
  a: 1
  b:
    c: 3
    d: 4
"""
print yaml.dump(yaml.load(document))

give

a: 1
b: {c: 3, d: 4}

(see #18, #24)?

It’s a correct output despite the fact that the style of the nested mapping is different.

By default, PyYAML chooses the style of a collection depending on whether it has nested collections. If a collection has nested collections, it will be assigned the block style. Otherwise it will have the flow style.

If you want collections to be always serialized in the block style, set the parameter default_flow_style of dump() to False. For instance,

>>> print yaml.dump(yaml.load(document), default_flow_style=False)
a: 1
b:
  c: 3
  d: 4

Console output:
Actually, since the VIO parameters have nested collections (gravity!) but the tracker params don't, you need to alternate between the following styles:

>>> print yaml.dump({'traits': ['ONE_HAND', 'ONE_EYE']})
traits: [ONE_HAND, ONE_EYE]

>>> print yaml.dump({'traits': ['ONE_HAND', 'ONE_EYE']}, default_flow_style=False)
traits:
- ONE_HAND
- ONE_EYE

>>> print yaml.dump({'traits': ['ONE_HAND', 'ONE_EYE']}, default_flow_style=True)
{traits: [ONE_HAND, ONE_EYE]}

So: use default_flow_style=False for the tracker params, because they have no nested collections and would otherwise default to flow style.
But: pass no default at all for the VIO params, because they have nested collections and thus already default to block style; if you explicitly pass default_flow_style=True, it will put braces everywhere and mess everything up.

Attribute error

Description:
I tried to run the frontend Jupyter notebook and got an error.

Command:

jupyter notebook

Console output:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-5-e0819567fe5e> in <module>()
      8     plt.show()
      9 else:
---> 10     evt.draw_feature_tracking_stats(df_stats, True)

AttributeError: 'module' object has no attribute 'draw_feature_tracking_stats'

Additional files:
The zipped output logs:
EuRoC.zip

Please give also the following information:

  • evo version number shown by git rev-parse HEAD
  • Python version used:
  • operating system and version (e.g. Ubuntu 16.04 or Windows 10):
  • did you change the source code? (yes / no):


Plotting notebooks need to be Unit Tested!

Kind of related to #7.
It is particularly important that we unit test our evaluation tools with toy example data.
Otherwise, we risk misinterpreting the data we get from evaluation and arriving at the wrong conclusions.
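A sketch of what such a test could look like, using a hypothetical error-metric helper of the kind the notebooks share (function name and formula are illustrative, not the repo's actual API); the point is that toy trajectories have hand-checkable answers:

```python
import numpy as np

def compute_ape(traj_ref, traj_est):
    """Per-pose absolute position error: Euclidean norm of the difference."""
    return np.linalg.norm(traj_ref - traj_est, axis=1)

def test_compute_ape_on_toy_data():
    ref = np.zeros((3, 3))                       # three poses at the origin
    est = np.array([[1.0, 0.0, 0.0],
                    [0.0, 2.0, 0.0],
                    [0.0, 0.0, 0.0]])
    errors = compute_ape(ref, est)
    assert np.allclose(errors, [1.0, 2.0, 0.0])  # answers known by hand

test_compute_ape_on_toy_data()
```

The same pattern extends to the aggregation and plotting helpers: feed them a tiny synthetic trajectory whose metrics are obvious, and assert the numbers, not the pictures.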

Results yaml files not human readable

Description:
The results.yaml file output by the evaluation pipeline includes data on APE, RPE translation, and RPE rotation for each pipeline of each dataset. These files are also used to create the aggregate APE boxplots and TeX tables.

However, these files store the data from evo.core.metric as numpy arrays. When yaml.dump is called on non-Python primitives like this, the module stores the data in a format only readable by the same YAML loader. This isn't a problem if we use the same module to dump and load, but it would be a problem if:

  • Anyone wants to read the results file themselves
  • Anyone wants to load the results file using some other loader from another library or even another language

Example:

absolute_errors: !!python/object:evo.core.result.Result
  info:
    est_name: estimate
    label: APE (m)
    ref_name: reference
    title: APE w.r.t. translation part (m)
  np_arrays:
    error_array:
    - !!python/object/apply:numpy.core.multiarray.scalar
      - &id001 !!python/object/apply:numpy.dtype
        args:
        - f8
        - 0
        - 1
        state: !!python/tuple
        - 3
        - <
        - null
        - null
        - null
        - -1
        - -1
        - 0
      - !!binary |
        av4nMtSZiz8=
    - !!python/object/apply:numpy.core.multiarray.scalar
      - *id001
      - !!binary |
        7RnRmUE5iT8=
    - !!python/object/apply:numpy.core.multiarray.scalar
      - *id001
      - !!binary |
        ooVHZvhrhD8=
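One possible direction (a sketch, not a drop-in patch for the repo): convert the numpy values to plain Python types before dumping, so yaml.safe_dump produces output any loader, or a human, can read. The `results` dict below is a made-up stand-in for the evo output:

```python
import numpy as np
import yaml

# Stand-in for the kind of structure evo produces (names are illustrative).
results = {
    "ape": {"rmse": np.float64(0.0123), "errors": np.array([0.013, 0.009])},
}

def to_plain(obj):
    """Recursively turn numpy arrays/scalars into Python lists/floats."""
    if isinstance(obj, dict):
        return {k: to_plain(v) for k, v in obj.items()}
    if isinstance(obj, np.ndarray):
        return obj.tolist()
    if isinstance(obj, np.generic):
        return obj.item()
    return obj

# safe_dump refuses python-specific tags entirely, so the output is portable.
text = yaml.safe_dump(to_plain(results), default_flow_style=False)
```

The resulting file contains ordinary block-style YAML with no `!!python/...` tags, at the cost of losing the exact numpy dtypes on reload.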

`confirm_overwrite` parameter is sent downstream but never obeyed

In several places we pass confirm_overwrite as a parameter, which if True asks the user to confirm before overwriting results files.

However, this parameter is not set anywhere in the experiments file or in any configs. Also, in some places where it should be passed along, it is actually not passed along at all.

Either we make this a config parameter and properly use it (recommend making it a class member rather than passing it on a per-dataset basis), OR we get rid of it entirely, because we always set it to False anyway.

@ToniRV thoughts?

df_to_trajectory not a member of pandas_bridge

Description:
Due to updates in evo, the original repo has removed/moved the df_to_trajectory function from pandas_bridge. Hence Kimera-VIO-Evaluation fails.

Command:

./evaluation/main_evaluation.py -r -a --save_plots --save_results --save_boxplots experiments/example_euroc.yaml

Console output:


Traceback (most recent call last):
  File "./evaluation/main_evaluation.py", line 76, in <module>
    if run(args):
  File "./evaluation/main_evaluation.py", line 26, in run
    dataset_evaluator.evaluate()
  File "/root/Kimera-VIO-Evaluation/venv/local/lib/python2.7/site-packages/evaluation/evaluation_lib.py", line 317, in evaluate
    self.evaluate_dataset(dataset)
  File "/root/Kimera-VIO-Evaluation/venv/local/lib/python2.7/site-packages/evaluation/evaluation_lib.py", line 334, in evaluate_dataset
    if not self.__evaluate_run(pipeline_type, dataset):
  File "/root/Kimera-VIO-Evaluation/venv/local/lib/python2.7/site-packages/evaluation/evaluation_lib.py", line 374, in __evaluate_run
    dataset_name, discard_n_start_poses, discard_n_end_poses)
  File "/root/Kimera-VIO-Evaluation/venv/local/lib/python2.7/site-packages/evaluation/evaluation_lib.py", line 412, in run_analysis
    traj_ref, traj_est_vio, traj_est_pgo = self.read_traj_files(traj_ref_path, traj_vio_path, traj_pgo_path)
  File "/root/Kimera-VIO-Evaluation/venv/local/lib/python2.7/site-packages/evaluation/evaluation_lib.py", line 586, in read_traj_files
    traj_ref = pandas_bridge.df_to_trajectory(pd.read_csv(traj_ref_path, sep=',', index_col=0))
AttributeError: 'module' object has no attribute 'df_to_trajectory'


Additional files:
No additional files necessary.

Please give also the following information:

  • evo version number shown by git rev-parse HEAD: evo-1 1.1.2
  • Python version used: 2.7
  • operating system and version (e.g. Ubuntu 16.04 or Windows 10): Ubuntu 18.04
  • did you change the source code? (yes / no): no
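Until the evo versions are reconciled, a stand-in for the removed helper can be reconstructed from the evo source quoted in the tracebacks above. The sketch below only extracts the raw arrays; constructing the actual evo trajectory object from them is left out, since that constructor is exactly what differs across evo versions:

```python
import pandas as pd

def df_to_trajectory_arrays(df):
    """Stand-in sketch for the removed evo helper: extract the arrays an evo
    trajectory is built from. Assumes x/y/z and qw/qx/qy/qz columns and
    timestamps as the DataFrame index (same contract as the old helper)."""
    if not isinstance(df, pd.DataFrame):
        raise TypeError("pandas.DataFrame or derived required")
    positions_xyz = df.loc[:, ["x", "y", "z"]].to_numpy()
    quaternions_wxyz = df.loc[:, ["qw", "qx", "qy", "qz"]].to_numpy()
    timestamps = df.index.to_numpy()
    return positions_xyz, quaternions_wxyz, timestamps

# Toy check with a single pose:
df = pd.DataFrame({"x": [0.0], "y": [1.0], "z": [2.0],
                   "qw": [1.0], "qx": [0.0], "qy": [0.0], "qz": [0.0]},
                  index=[0])
xyz, quat, stamps = df_to_trajectory_arrays(df)
```

The cleaner long-term fix is still to pin the evo-1 fork (or vendor the helper) so the installed evo version matches what evaluation_lib.py expects.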


Cannot analyze without first running

If you run main_evaluation.py experiments.yaml -a without the -r flag, you would expect DatasetEvaluator to evaluate the results it already has access to rather than run the full pipeline.

However, the first thing DatasetEvaluator does is move output files from the /results/tmp_output/ folder to the specific pipeline folder, for instance /results/MH_01_easy/S/output.

DatasetEvaluator checks the tmp_output folder, finds nothing, and throws an Exception.

This is not necessarily undesirable behavior: if you want to analyze data collected by running the executable yourself, you can always tell the executable to log to this location. However, it means you can only run analysis with -a once before getting this error.

So this is more of a question of how we want to handle these kinds of cases.
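If we decide analysis-only runs should degrade gracefully, a hypothetical guard (function and variable names invented for illustration, not the repo's actual API) could skip the move when there is nothing to move:

```python
import os
import tempfile

def move_output_if_present(tmp_output_dir, dest_dir):
    """Hypothetical sketch: move fresh results only when they exist, so that
    analysis-only runs fall back to whatever is already in dest_dir."""
    if not os.path.isdir(tmp_output_dir) or not os.listdir(tmp_output_dir):
        return False  # nothing was run; analyze existing results instead
    os.makedirs(dest_dir, exist_ok=True)
    for name in os.listdir(tmp_output_dir):
        os.replace(os.path.join(tmp_output_dir, name),
                   os.path.join(dest_dir, name))
    return True

# Toy demonstration with temporary directories:
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
moved_empty = move_output_if_present(src, dst)   # empty tmp dir: no move
open(os.path.join(src, "traj.csv"), "w").close()
moved_full = move_output_if_present(src, dst)    # file gets moved over
```

Whether this is preferable to simply documenting the single-use behavior is exactly the open question above.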
