Exercise notebooks for Machine Learning modules on Microsoft Learn
License: MIT License
This repository contains the exercise files for the Create machine learning models learning path on Microsoft Learn.
We are not currently accepting external contributions for this repo. If you encounter any problems, please report an issue.
Thanks for the course. You may already have this planned for later; I'm only up to 02.
You may want to add the HTML repr functionality to visualize the scikit-learn pipelines:
https://scikit-learn.org/stable/modules/compose.html#visualizing-composite-estimators
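For reference, a minimal sketch of how that diagram representation can be enabled, assuming a scikit-learn version recent enough to support set_config(display='diagram'); the pipeline shown here is illustrative, not one from the notebooks:

from sklearn import set_config
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Enable the rich diagram representation for composite estimators
set_config(display='diagram')

# In a notebook, displaying the pipeline object now renders it as a diagram
pipeline = Pipeline(steps=[('scaler', StandardScaler()),
                           ('model', LogisticRegression())])
pipeline  # last expression in a cell -> rendered as a diagram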
In the 01 - Data Exploration.ipynb notebook we have
print(df_students.groupby(df_students.Pass)['StudyHours', 'Grade'].mean())
but this gives the error below:
ValueError: Cannot subset columns with a tuple with more than one element. Use a list instead.
Solution:
Change it to print(df_students.groupby(df_students.Pass)[['StudyHours', 'Grade']].mean()) to resolve the error.
Note:
A very small but important fix.
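For reference, a minimal, self-contained sketch (using a hypothetical toy dataframe, not the actual course data) showing why the list form is needed in recent pandas versions:

import pandas as pd

# Hypothetical toy data standing in for df_students
df_students = pd.DataFrame({
    'Name': ['Dan', 'Joann', 'Pedro', 'Rosie'],
    'StudyHours': [10.0, 11.5, 9.0, 16.0],
    'Grade': [50, 50, 47, 97],
    'Pass': [False, False, False, True],
})

# Selecting multiple columns from a GroupBy requires a list, not a tuple,
# in recent pandas versions; the tuple form raises a ValueError.
print(df_students.groupby(df_students.Pass)[['StudyHours', 'Grade']].mean())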
I am getting an error in this code snippet:
from sklearn.tree import DecisionTreeRegressor
from sklearn.tree import export_text
# Train the model
model = DecisionTreeRegressor().fit(X_train, y_train)
print (model, "\n")
# Visualize the model tree
tree = export_text(model)
print(tree)
Error:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-17-58c166972779> in <module>
1 from sklearn.tree import DecisionTreeRegressor
----> 2 from sklearn.tree import export_text
3
4 # Train the model
5 model = DecisionTreeRegressor().fit(X_train, y_train)
ImportError: cannot import name 'export_text'
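export_text was only added in scikit-learn 0.21, so this ImportError usually means an older scikit-learn is installed. A minimal sketch of a check and fallback, assuming upgrading is not an option; the toy training data here is illustrative, not the notebook's:

# Check the installed version; export_text requires scikit-learn >= 0.21
import sklearn
print(sklearn.__version__)

# export_graphviz (present in older releases) can be used instead
# to inspect the tree structure. Toy data for illustration only:
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_graphviz

X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
y_train = np.array([1.5, 3.1, 4.4, 6.2])

model = DecisionTreeRegressor(max_depth=2).fit(X_train, y_train)
print(export_graphviz(model, out_file=None))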
Found a small bug in '05a - Deep Neural Networks (TensorFlow)'.
First notebook cell:
# The dataset is too small to be useful for deep learning
# So we'll oversample it to triple its size
for i in range(1,3):
    penguins = penguins.append(penguins)
This is not correct: appending the dataframe to itself twice doubles it each time, quadrupling its size rather than tripling it.
Possible Fix:
# The dataset is too small to be useful for deep learning
# So we'll oversample it to triple its size
penguins_3x = pd.DataFrame()
for i in range(3):
    penguins_3x = penguins_3x.append(penguins)
penguins = penguins_3x
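As a side note, DataFrame.append is deprecated in recent pandas releases, so a pd.concat-based version of the same fix may age better. A minimal sketch, assuming penguins is the dataframe already loaded earlier in the notebook:

import pandas as pd

# Triple the dataset by concatenating three copies of the original rows.
# `penguins` is assumed to be the dataframe loaded in an earlier cell.
penguins = pd.concat([penguins] * 3, ignore_index=True)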
https://github.com/MicrosoftDocs/ml-basics/blob/master/local-setup.md states which packages to use. It would be helpful to provide a requirements file in either .txt or .yml form to simplify package installation.
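For illustration only, a sketch of what such a requirements.txt might contain, based on packages the notebooks import; the exact package list and version pins would need to match local-setup.md:

# requirements.txt (illustrative; pin versions to match local-setup.md)
numpy
pandas
matplotlib
scikit-learn
tensorflow
torch
torchvision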
Hello team,
I need a small explanation regarding this code section in the Flight solutions in the Challenges folder.
Thank you!
Hi Team,
This isn't really an issue, but I love penguins and I found an incorrect species name:
penguin_classes = ['Amelie', 'Gentoo', 'Chinstrap']
It should be Adelie, not Amelie :)
Regards,
Faiçal
Torch CrossEntropyLoss expects logits instead of normalized probabilities.
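A minimal sketch illustrating the point (the tiny model here is illustrative, not the notebook's actual network): nn.CrossEntropyLoss applies log_softmax internally, so the model should output raw scores rather than applying softmax before the loss.

import torch
import torch.nn as nn

# Illustrative only: a tiny linear "model" producing raw scores (logits)
model = nn.Linear(4, 3)
x = torch.randn(8, 4)
targets = torch.randint(0, 3, (8,))

logits = model(x)                 # raw, unnormalized scores
loss_fn = nn.CrossEntropyLoss()   # applies log_softmax + NLL internally
loss = loss_fn(logits, targets)   # correct: pass logits, not softmax output
print(loss.item())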
When I try to run cell 32: print(df_students.groupby(df_students.Pass).Name.count()) and cell 33: print(df_students.groupby(df_students.Pass)['StudyHours', 'Grade'].mean()),
I get this error: ValueError: Grouper for '<class 'pandas.core.frame.DataFrame'>' not 1-dimensional
I am using the Python 3.6 - AzureML Kernel.
Thank you
There's a typo in cell 10 of 05b - Convolutional Neural Networks (Tensorflow).ipynb:
Note: We're only using 5 epochs to minimze the training [...]
Should be:
Note: We're only using 5 epochs to minimize the training [...]
The two notebooks 05a - Deep Neural Networks (PyTorch).ipynb
and 05a - Deep Neural Networks (TensorFlow).ipynb
contain the following piece of code:
# The dataset is too small to be useful for deep learning
# So we'll oversample it to increase its size
for i in range(1,3):
    penguins = penguins.append(penguins)
This creates a new dataframe that contains four copies of each row of the original dataframe. Since this happens before the training/test split, the probability that a row of the original dataframe appears in both the training and the test set is approximately 0.75. In other words, one can expect about 3/4 of the original rows to be present in both sets.
This constitutes a leakage of information from the test set into the training set, which renders the test set incapable of assessing the generalization capability of the trained model. In the case of the penguin toy dataset, this does not matter much: The three species appear to be well-separated in feature space, so that overfitting is not an immediate concern. Still, mixing training and test data is bad practice and should not be taught to ML beginners.
I therefore suggest the removal of the piece of code shown above. Since the model is no longer exposed to multiple copies of each row in one epoch of training, the number of epochs has to be increased to achieve the same test set accuracy. Training for 100 instead of 50 epochs worked well in my tests.
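If oversampling is kept for didactic reasons, one way to avoid the leakage is to split first and oversample only the training portion. A minimal sketch, assuming the penguins dataframe from the notebook; the feature and label column names below are assumptions and may need adjusting:

import numpy as np
from sklearn.model_selection import train_test_split

# Assumed from the notebook: the `penguins` dataframe and its column names
features = ['CulmenLength', 'CulmenDepth', 'FlipperLength', 'BodyMass']
label = 'Species'

# Split the ORIGINAL rows first, so no original row can end up on both sides
x_train, x_test, y_train, y_test = train_test_split(
    penguins[features].values, penguins[label].values,
    test_size=0.30, random_state=0)

# Oversample only the training portion (three copies of each training row)
x_train = np.concatenate([x_train] * 3)
y_train = np.concatenate([y_train] * 3)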
"Grade vs Salary" comment should be "Grade vs StudyHours"
The link to the challenge is not valid anymore and goes to '404 - page not found'.
Hi,
I have followed the steps as mentioned. However, when I try opening any of the .ipynb files, I see a message saying it is reconnecting to the kernel, and the reconnection never happens. I am unable to continue. Could you please help? Am I missing anything?
Running
base_model = keras.applications.resnet.ResNet50(weights='imagenet', include_top=False, input_shape=(224,224,3))
Gives
InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
Full traceback and environment below.
TensorFlow version: 2.3.1
Keras version: 2.4.0
Note: I'm running locally and I do have a GPU, so I'm not sure if that is the cause.
---------------------------------------------------------------------------
InternalError Traceback (most recent call last)
<ipython-input-4-50b0035545e7> in <module>
----> 1 base_model = keras.applications.resnet.ResNet50(weights='imagenet', include_top=False, input_shape=(224,224,3))
2 print(base_model.summary())
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/keras/applications/resnet.py in ResNet50(include_top, weights, input_tensor, input_shape, pooling, classes, **kwargs)
473
474 return ResNet(stack_fn, False, True, 'resnet50', include_top, weights,
--> 475 input_tensor, input_shape, pooling, classes, **kwargs)
476
477
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/keras/applications/resnet.py in ResNet(stack_fn, preact, use_bias, model_name, include_top, weights, input_tensor, input_shape, pooling, classes, classifier_activation, **kwargs)
169 x = layers.ZeroPadding2D(
170 padding=((3, 3), (3, 3)), name='conv1_pad')(img_input)
--> 171 x = layers.Conv2D(64, 7, strides=2, use_bias=use_bias, name='conv1_conv')(x)
172
173 if not preact:
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
924 if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
925 return self._functional_construction_call(inputs, args, kwargs,
--> 926 input_list)
927
928 # Maintains info about the `Layer.call` stack.
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
1096 # Build layer if applicable (if the `build` method has been
1097 # overridden).
-> 1098 self._maybe_build(inputs)
1099 cast_inputs = self._maybe_cast_inputs(inputs, input_list)
1100
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in _maybe_build(self, inputs)
2641 # operations.
2642 with tf_utils.maybe_init_scope(self):
-> 2643 self.build(input_shapes) # pylint:disable=not-callable
2644 # We must set also ensure that the layer is marked as built, and the build
2645 # shape is stored since user defined build functions may not be calling
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/keras/layers/convolutional.py in build(self, input_shape)
202 constraint=self.kernel_constraint,
203 trainable=True,
--> 204 dtype=self.dtype)
205 if self.use_bias:
206 self.bias = self.add_weight(
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in add_weight(self, name, shape, dtype, initializer, regularizer, trainable, constraint, partitioner, use_resource, synchronization, aggregation, **kwargs)
612 synchronization=synchronization,
613 aggregation=aggregation,
--> 614 caching_device=caching_device)
615 if regularizer is not None:
616 # TODO(fchollet): in the future, this should be handled at the
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _add_variable_with_custom_getter(self, name, shape, dtype, initializer, getter, overwrite, **kwargs_for_getter)
748 dtype=dtype,
749 initializer=initializer,
--> 750 **kwargs_for_getter)
751
752 # If we set an initializer and the variable processed it, tracking will not
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer_utils.py in make_variable(name, shape, dtype, initializer, trainable, caching_device, validate_shape, constraint, use_resource, collections, synchronization, aggregation, partitioner)
143 synchronization=synchronization,
144 aggregation=aggregation,
--> 145 shape=variable_shape if variable_shape else None)
146
147
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in __call__(cls, *args, **kwargs)
258 def __call__(cls, *args, **kwargs):
259 if cls is VariableV1:
--> 260 return cls._variable_v1_call(*args, **kwargs)
261 elif cls is Variable:
262 return cls._variable_v2_call(*args, **kwargs)
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in _variable_v1_call(cls, initial_value, trainable, collections, validate_shape, caching_device, name, variable_def, dtype, expected_shape, import_scope, constraint, use_resource, synchronization, aggregation, shape)
219 synchronization=synchronization,
220 aggregation=aggregation,
--> 221 shape=shape)
222
223 def _variable_v2_call(cls,
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in <lambda>(**kwargs)
197 shape=None):
198 """Call on Variable class. Useful to force the signature."""
--> 199 previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
200 for _, getter in ops.get_default_graph()._variable_creator_stack: # pylint: disable=protected-access
201 previous_getter = _make_getter(getter, previous_getter)
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py in default_variable_creator(next_creator, **kwargs)
2595 synchronization=synchronization,
2596 aggregation=aggregation,
-> 2597 shape=shape)
2598 else:
2599 return variables.RefVariable(
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in __call__(cls, *args, **kwargs)
262 return cls._variable_v2_call(*args, **kwargs)
263 else:
--> 264 return super(VariableMetaclass, cls).__call__(*args, **kwargs)
265
266
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py in __init__(self, initial_value, trainable, collections, validate_shape, caching_device, name, dtype, variable_def, import_scope, constraint, distribute_strategy, synchronization, aggregation, shape)
1516 aggregation=aggregation,
1517 shape=shape,
-> 1518 distribute_strategy=distribute_strategy)
1519
1520 def _init_from_args(self,
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py in _init_from_args(self, initial_value, trainable, collections, caching_device, name, dtype, constraint, synchronization, aggregation, distribute_strategy, shape)
1649 with ops.name_scope("Initializer"), device_context_manager(None):
1650 initial_value = ops.convert_to_tensor(
-> 1651 initial_value() if init_from_fn else initial_value,
1652 name="initial_value", dtype=dtype)
1653 if shape is not None:
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/keras/initializers/initializers_v2.py in __call__(self, shape, dtype)
395 (via `tf.keras.backend.set_floatx(float_dtype)`)
396 """
--> 397 return super(VarianceScaling, self).__call__(shape, dtype=_get_dtype(dtype))
398
399
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/ops/init_ops_v2.py in __call__(self, shape, dtype)
559 else:
560 limit = math.sqrt(3.0 * scale)
--> 561 return self._random_generator.random_uniform(shape, -limit, limit, dtype)
562
563 def get_config(self):
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/ops/init_ops_v2.py in random_uniform(self, shape, minval, maxval, dtype)
1042 op = random_ops.random_uniform
1043 return op(
-> 1044 shape=shape, minval=minval, maxval=maxval, dtype=dtype, seed=self.seed)
1045
1046 def truncated_normal(self, shape, mean, stddev, dtype):
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
199 """Call target, and fall back on dispatchers if there is a TypeError."""
200 try:
--> 201 return target(*args, **kwargs)
202 except (TypeError, ValueError):
203 # Note: convert_to_eager_tensor currently raises a ValueError, not a
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/ops/random_ops.py in random_uniform(shape, minval, maxval, dtype, seed, name)
286 maxval = 1
287 with ops.name_scope(name, "random_uniform", [shape, minval, maxval]) as name:
--> 288 shape = tensor_util.shape_tensor(shape)
289 # In case of [0,1) floating results, minval and maxval is unused. We do an
290 # `is` comparison here since this is cheaper than isinstance or __eq__.
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/framework/tensor_util.py in shape_tensor(shape)
1027 # not convertible to Tensors because of mixed content.
1028 shape = tuple(map(tensor_shape.dimension_value, shape))
-> 1029 return ops.convert_to_tensor(shape, dtype=dtype, name="shape")
1030
1031
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
1497
1498 if ret is None:
-> 1499 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
1500
1501 if ret is NotImplemented:
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py in _constant_tensor_conversion_function(v, dtype, name, as_ref)
336 as_ref=False):
337 _ = as_ref
--> 338 return constant(v, dtype=dtype, name=name)
339
340
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py in constant(value, dtype, shape, name)
262 """
263 return _constant_impl(value, dtype, shape, name, verify_shape=False,
--> 264 allow_broadcast=True)
265
266
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
273 with trace.Trace("tf.constant"):
274 return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
--> 275 return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
276
277 g = ops.get_default_graph()
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py in _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
298 def _constant_eager_impl(ctx, value, dtype, shape, verify_shape):
299 """Implementation of eager constant."""
--> 300 t = convert_to_eager_tensor(value, ctx, dtype)
301 if shape is None:
302 return t
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
95 except AttributeError:
96 dtype = dtypes.as_dtype(dtype).as_datatype_enum
---> 97 ctx.ensure_initialized()
98 return ops.EagerTensor(value, ctx.device_name, dtype)
99
~/local/bin/anaconda3/envs/ml-basics/lib/python3.7/site-packages/tensorflow/python/eager/context.py in ensure_initialized(self)
537 if self._use_tfrt is not None:
538 pywrap_tfe.TFE_ContextOptionsSetTfrt(opts, self._use_tfrt)
--> 539 context_handle = pywrap_tfe.TFE_NewContext(opts)
540 finally:
541 pywrap_tfe.TFE_DeleteContextOptions(opts)
InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
(ml-basics) ray@ray-MS-7B43:~$ conda list
# packages in environment at /home/ray/local/bin/anaconda3/envs/ml-basics:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
absl-py 0.10.0 pypi_0 pypi
argon2-cffi 20.1.0 pypi_0 pypi
astunparse 1.6.3 pypi_0 pypi
async-generator 1.10 pypi_0 pypi
attrs 20.2.0 pypi_0 pypi
backcall 0.2.0 pypi_0 pypi
bleach 3.2.1 pypi_0 pypi
ca-certificates 2020.7.22 0
cachetools 4.1.1 pypi_0 pypi
certifi 2020.6.20 py37_0
cffi 1.14.3 pypi_0 pypi
chardet 3.0.4 pypi_0 pypi
cycler 0.10.0 pypi_0 pypi
decorator 4.4.2 pypi_0 pypi
defusedxml 0.6.0 pypi_0 pypi
entrypoints 0.3 pypi_0 pypi
future 0.18.2 pypi_0 pypi
gast 0.3.3 pypi_0 pypi
google-auth 1.22.1 pypi_0 pypi
google-auth-oauthlib 0.4.1 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
grpcio 1.32.0 pypi_0 pypi
h5py 2.10.0 pypi_0 pypi
idna 2.10 pypi_0 pypi
imageio 2.9.0 pypi_0 pypi
importlib-metadata 2.0.0 pypi_0 pypi
ipykernel 5.3.4 pypi_0 pypi
ipython 7.18.1 pypi_0 pypi
ipython-genutils 0.2.0 pypi_0 pypi
ipywidgets 7.5.1 pypi_0 pypi
jedi 0.17.2 pypi_0 pypi
jinja2 2.11.2 pypi_0 pypi
joblib 0.17.0 pypi_0 pypi
jsonschema 3.2.0 pypi_0 pypi
jupyter 1.0.0 pypi_0 pypi
jupyter-client 6.1.7 pypi_0 pypi
jupyter-console 6.2.0 pypi_0 pypi
jupyter-core 4.6.3 pypi_0 pypi
jupyterlab-pygments 0.1.2 pypi_0 pypi
keras-preprocessing 1.1.2 pypi_0 pypi
kiwisolver 1.2.0 pypi_0 pypi
ld_impl_linux-64 2.33.1 h53a641e_7
libedit 3.1.20191231 h14c3975_1
libffi 3.3 he6710b0_2
libgcc-ng 9.1.0 hdf63c60_0
libstdcxx-ng 9.1.0 hdf63c60_0
markdown 3.3.1 pypi_0 pypi
markupsafe 1.1.1 pypi_0 pypi
matplotlib 3.3.2 pypi_0 pypi
mistune 0.8.4 pypi_0 pypi
nbclient 0.5.0 pypi_0 pypi
nbconvert 6.0.7 pypi_0 pypi
nbformat 5.0.7 pypi_0 pypi
ncurses 6.2 he6710b0_1
nest-asyncio 1.4.1 pypi_0 pypi
networkx 2.5 pypi_0 pypi
notebook 6.1.4 pypi_0 pypi
oauthlib 3.1.0 pypi_0 pypi
openssl 1.1.1h h7b6447c_0
opt-einsum 3.3.0 pypi_0 pypi
packaging 20.4 pypi_0 pypi
pandas 1.1.3 pypi_0 pypi
pandocfilters 1.4.2 pypi_0 pypi
parso 0.7.1 pypi_0 pypi
pexpect 4.8.0 pypi_0 pypi
pickleshare 0.7.5 pypi_0 pypi
pillow 7.2.0 pypi_0 pypi
pip 20.2.3 py37_0
prometheus-client 0.8.0 pypi_0 pypi
prompt-toolkit 3.0.8 pypi_0 pypi
protobuf 3.13.0 pypi_0 pypi
ptyprocess 0.6.0 pypi_0 pypi
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pycparser 2.20 pypi_0 pypi
pygments 2.7.1 pypi_0 pypi
pyparsing 2.4.7 pypi_0 pypi
pyrsistent 0.17.3 pypi_0 pypi
python 3.7.9 h7579374_0
python-dateutil 2.8.1 pypi_0 pypi
pytz 2020.1 pypi_0 pypi
pywavelets 1.1.1 pypi_0 pypi
pyzmq 19.0.2 pypi_0 pypi
qtconsole 4.7.7 pypi_0 pypi
qtpy 1.9.0 pypi_0 pypi
readline 8.0 h7b6447c_0
requests 2.24.0 pypi_0 pypi
requests-oauthlib 1.3.0 pypi_0 pypi
rsa 4.6 pypi_0 pypi
scikit-image 0.17.2 pypi_0 pypi
scikit-learn 0.23.2 pypi_0 pypi
scipy 1.5.2 pypi_0 pypi
send2trash 1.5.0 pypi_0 pypi
setuptools 50.3.0 py37hb0f4dca_1
sqlite 3.33.0 h62c20be_0
tensorboard 2.3.0 pypi_0 pypi
tensorboard-plugin-wit 1.7.0 pypi_0 pypi
tensorflow 2.3.1 pypi_0 pypi
tensorflow-estimator 2.3.0 pypi_0 pypi
termcolor 1.1.0 pypi_0 pypi
terminado 0.9.1 pypi_0 pypi
testpath 0.4.4 pypi_0 pypi
threadpoolctl 2.1.0 pypi_0 pypi
tifffile 2020.10.1 pypi_0 pypi
tk 8.6.10 hbc83047_0
torch 1.6.0+cpu pypi_0 pypi
torchvision 0.7.0+cpu pypi_0 pypi
tornado 6.0.4 pypi_0 pypi
traitlets 5.0.4 pypi_0 pypi
urllib3 1.25.10 pypi_0 pypi
wcwidth 0.2.5 pypi_0 pypi
webencodings 0.5.1 pypi_0 pypi
werkzeug 1.0.1 pypi_0 pypi
wheel 0.35.1 py_0
widgetsnbextension 3.5.1 pypi_0 pypi
wrapt 1.12.1 pypi_0 pypi
xz 5.2.5 h7b6447c_0
zipp 3.3.0 pypi_0 pypi
zlib 1.2.11 h7b6447c_3
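In case it helps others hitting the same error locally: one workaround that sometimes helps with this kind of implicit CUDA initialization failure is to enable GPU memory growth before building the model. This is a sketch under the assumption of TensorFlow 2.x, not a confirmed fix for this specific setup:

import tensorflow as tf
from tensorflow import keras

# Ask TensorFlow to allocate GPU memory on demand instead of grabbing it all up front
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

base_model = keras.applications.resnet.ResNet50(
    weights='imagenet', include_top=False, input_shape=(224, 224, 3))
print(base_model.summary())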
In https://github.com/MicrosoftDocs/ml-basics/blob/master/02%20-%20Regression.ipynb you have
model = LinearRegression(normalize=False).fit(X_train, y_train)
I'm curious what the normalize=False does. Would you mind explaining in the notebook?
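For context (my understanding, not an official answer from the notebook authors): normalize=False simply fits on the raw feature values, whereas normalize=True would rescale each feature before fitting. The parameter has since been deprecated in newer scikit-learn releases, where the recommended (though not exactly equivalent) approach is an explicit scaler in a pipeline, e.g.:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

# Toy data for illustration only
X_train = np.array([[1.0, 200.0], [2.0, 150.0], [3.0, 300.0], [4.0, 250.0]])
y_train = np.array([10.0, 12.0, 20.0, 22.0])

# Scale the features explicitly, then fit the linear model
model = make_pipeline(StandardScaler(), LinearRegression()).fit(X_train, y_train)
print(model.predict(X_train))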
The axes on the heatmap in the classification notebook are incorrect. Predicted species should be on the horizontal axis and actual species should be on the vertical axis. The scikit-learn documentation for the confusion matrix states that it returns "Confusion matrix whose i-th row and j-th column entry indicates the number of samples with true label being i-th class and predicted label being j-th class."
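A minimal plotting sketch consistent with that convention (the class names and label arrays are illustrative, not the notebook's data): rows of the matrix are the true labels and columns are the predictions, so the x-axis should be labelled as predicted and the y-axis as actual.

import numpy as np
from matplotlib import pyplot as plt
from sklearn.metrics import confusion_matrix

# Illustrative labels only
classes = ['Adelie', 'Gentoo', 'Chinstrap']
y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 0]

cm = confusion_matrix(y_true, y_pred)  # rows = true label, columns = predicted label
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.xticks(range(len(classes)), classes, rotation=45)
plt.yticks(range(len(classes)), classes)
plt.xlabel('Predicted Species')  # columns
plt.ylabel('Actual Species')     # rows
plt.show()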
I'm a beginner, trying to teach myself machine learning. It is frustrating to read comments that are just a useless English translation of the code, which the code already shows. Look at the sample below from the "05b-Convolutional Neural Network (PyTorch).ipynb" file, where I added comments indicating where more meaningful comments would actually provide some value to help with learning.
# Create a neural net class
class Net(nn.Module):
    # Constructor
    def __init__(self, num_classes=3):
        super(Net, self).__init__()
        # Our images are RGB, so input channels = 3. We'll apply 12 filters in the first convolutional layer
        #explain why 12 filters
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=12, kernel_size=3, stride=1, padding=1)
        # We'll apply max pooling with a kernel size of 2
        #explain why using max pooling of kernel size of 2
        self.pool = nn.MaxPool2d(kernel_size=2)
        # A second convolutional layer takes 12 input channels, and generates 12 outputs
        self.conv2 = nn.Conv2d(in_channels=12, out_channels=12, kernel_size=3, stride=1, padding=1)
        # A third convolutional layer takes 12 inputs and generates 24 outputs
        #explain why it generates 24 outputs
        self.conv3 = nn.Conv2d(in_channels=12, out_channels=24, kernel_size=3, stride=1, padding=1)
        # A drop layer deletes 20% of the features to help prevent overfitting
        #explain how this helps prevent overfitting
        self.drop = nn.Dropout2d(p=0.2)
Issue at the "explore the data" step:
NameError Traceback (most recent call last)
in
7
8 # Get the class names
----> 9 classes = os.listdir(data_folder)
10 classes.sort()
11 print(len(classes), 'classes:')
NameError: name 'os' is not defined
The import os statement should be moved up a few cells, so it runs before this one.
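A minimal sketch of the fix (data_folder is assumed to be defined in an earlier cell of the notebook):

import os  # needs to be imported before this cell uses it

# Get the class names; `data_folder` is assumed to come from an earlier cell
classes = os.listdir(data_folder)
classes.sort()
print(len(classes), 'classes:', classes)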
According to the console output, the keras module has no attribute called "__version__".
Keras has been integrated into TensorFlow, and when imported from tensorflow they share the same version.
At least according to StackOverflow:
https://stackoverflow.com/questions/73934025/attributeerror-module-keras-has-no-attribute-version
The resolution of this issue would consist of deleting one line of code in https://github.com/MicrosoftDocs/ml-basics/blob/master/05a%20-%20Deep%20Neural%20Networks%20(TensorFlow).ipynb right after importing TensorFlow.
import tensorflow
from tensorflow import keras
from tensorflow.keras import models
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras import utils
from tensorflow.keras import optimizers
# Set random seed for reproducibility
tensorflow.random.set_seed(0)
print("Libraries imported.")
print('Keras version:',keras.__version__) #This line has to go
print('TensorFlow version:',tensorflow.__version__)
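A more defensive variant that should work across versions (a sketch, not the official fix): print only the TensorFlow version, or guard the Keras line so it degrades gracefully when __version__ is absent.

import tensorflow
from tensorflow import keras

print('TensorFlow version:', tensorflow.__version__)
# Only print a Keras version if the attribute actually exists in this build
if hasattr(keras, '__version__'):
    print('Keras version:', keras.__version__)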