haxsaw / hikaru

Move smoothly between Kubernetes YAML and Python for creating/updating/componentizing configurations.

License: MIT License

Python 99.97% Shell 0.03%
python-kubernetes yaml yaml-parser yaml-files yaml-processor python3 python-library kubernetes kubernetes-api

hikaru's Introduction

Hikaru

Version 1.3.0

Badges: Travis CI build · MIT license (https://github.com/haxsaw/hikaru/blob/main/LICENSE) · PyPI Python versions · coverage

Try it: see Hikaru convert your K8s YAML

Release notes

Full documentation at Read the Docs

Hikaru is a collection of tools that allow you to work with Kubernetes resources from within Python in a variety of ways:

  • Hikaru provides type-annotated classes that model all of the Kubernetes resources in Python and supports CRUD operations on those classes to manage their lifecycle in your Kubernetes cluster.
  • Hikaru also provides tooling to shift formats for these objects, allowing you to turn K8s YAML into Python objects, JSON, or Python dicts, and vice-versa. It can also generate Python source code for K8s objects loaded from non-Python sources.
  • Hikaru also supports a number of features that aid in the management of your objects such as searching for specific fields or diffing two instances of a K8s resource.
  • Hikaru includes support for creating 'watches' on your objects, providing a means to monitor events on your provisioned K8s resources.
  • Hikaru supports the creation of CRDs that participate in all of the above features, such as CRUD operations and watches.
  • Finally, Hikaru includes a facility to specify a collection of resources as an 'application', similar in spirit to a Helm chart, and provides the same CRUD, watch, and management capabilities on the entire application as it does on single resource objects (full format shifting support to come).

The hikaru package is a meta-package with the following dependencies:

  • hikaru-core for core Hikaru capabilities
  • hikaru-codegen for generating formatted Python source from Hikaru objects
  • The four most recent hikaru-model-* packages, currently:
    • 25.x
    • 26.x
    • 27.x
    • 28.x

See each package's specific PyPI page for any details on that package.

While you can continue to use this meta-package as before, we suggest you migrate to using the model versions that match the version of the Kubernetes API you need to interact with. This package just helps make that smoother.

From Python

Hikaru uses type-annotated Python dataclasses to represent each of the kinds of objects defined in the Kubernetes API, so when used with an IDE that understands Python type annotations, Hikaru enables the IDE to provide the user direct assistance as to what parameters are available, what type each parameter must be, and which parameters are optional. Assembled Hikaru objects can be rendered into YAML that can be processed by regular Kubernetes tools.

From YAML

But you don’t have to start with authoring Python: you can use Hikaru to parse Kubernetes YAML into these same Python objects, at which point you can inspect the created objects, modify them and re-generate new YAML, or even have Hikaru emit Python source code that will re-create the same structure but from the Python interface.

From JSON

You can also process JSON or Python dict representations of Kubernetes configs into the corresponding Python objects.

To YAML, Python, or JSON

Hikaru can output a Python Kubernetes object as Python source code, YAML, JSON, or a Python dict, and go back to any of these representations, allowing you to shift easily between representational formats for various purposes.

Supports multiple versions of Kubernetes

Hikaru allows you to use multiple releases of the Kubernetes client, providing appropriate bindings/methods/attributes for every object in each version of a release.

Direct Kubernetes via CRUD or low-level methods

You can use Hikaru objects to interact with a Kubernetes system. Hikaru wraps the Kubernetes Python client and maps API operations on to the Hikaru model they involve. For example, you can now create a Pod directly from a Pod object. Hikaru supports a higher-level CRUD-style set of methods as well as all the operations defined in the Swagger API specification.

Hikaru can work with any Kubernetes-compliant system such as K3s and minikube.

Monitor Kubernetes activities with watches

Hikaru provides an abstraction over the K8s watch APIs, which allows you to easily create code that receives events for all activities carried out in your K8s cluster on a per-kind basis. Or, you can create a watch container that multiplexes the output from individual watches into a single stream.
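The multiplexing idea can be shown in miniature with plain Python queues; this is a conceptual sketch only (no cluster or hikaru API involved; the event tuples are made up), not hikaru's watch implementation:

```python
import queue
import threading

# Conceptual sketch of a watch "container" that funnels several per-kind
# event streams into a single stream. Illustrative only; real watches
# come from hikaru's watch support, not this code.
def pump(events, out):
    for e in events:
        out.put(e)

out = queue.Queue()
pod_events = [("ADDED", "pod-a"), ("MODIFIED", "pod-a")]   # made-up events
job_events = [("ADDED", "job-1")]
threads = [threading.Thread(target=pump, args=(src, out))
           for src in (pod_events, job_events)]
for t in threads:
    t.start()
for t in threads:
    t.join()

merged = [out.get() for _ in range(out.qsize())]
print(len(merged))  # all three events, merged into one stream
```

The real watch container does the same thing for live event streams: consumers read one stream instead of polling several.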

Create Custom Resource Definitions

As of release 1.0.0, Hikaru supports the creation of CRDs that integrate with the rest of Hikaru. Automatically generate schema from a Hikaru class, define CRDs to Kubernetes, manage CRD instances with CRUD methods, and create watchers that allow you to build your own controllers for your CRDs.
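The general idea behind generating a schema from a Hikaru class can be sketched with a toy converter from a typed dataclass to an OpenAPI-style schema; the class and mapping here are illustrative assumptions, not hikaru's actual generator:

```python
from dataclasses import dataclass, fields

# Toy mapping from Python annotations to OpenAPI types (illustrative).
PY_TO_OPENAPI = {str: "string", int: "integer", bool: "boolean", float: "number"}

@dataclass
class MyResourceSpec:   # hypothetical CRD spec class
    size: int
    label: str

def schema_for(cls):
    # Walk the dataclass fields and emit an object schema for them.
    return {"type": "object",
            "properties": {f.name: {"type": PY_TO_OPENAPI[f.type]}
                           for f in fields(cls)}}

print(schema_for(MyResourceSpec))
```

A real generator must also handle nested classes, lists, optionality, and required fields, but the field-walking core is the same.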

Model entire applications

Hikaru provides an Application base class that can be derived from to allow the creation of sets of resources that comprise the infra for an application system. This Application model can be inspected, provisioned into a cluster, read from a cluster, and watched, just like with individual Hikaru K8s objects.

Integrate your own subclasses

You can not only create your own subclasses of Hikaru document classes for your own use, but you can also register these classes with Hikaru, and it will make instances of your classes when it encounters the relevant apiVersion and kind values. So, for example, you can create your own MyPod subclass of Pod, and Hikaru will instantiate your subclass when reading Pod YAML.

Alternative to templating for customisation

Using Hikaru, you can assemble Kubernetes objects using previously defined libraries of objects in Python, craft replacements procedurally, or even tweak the values of an existing object and turn it back into YAML.

Build models for uses other than controlling systems

You can use Hikaru in the process of issuing instructions to Kubernetes, but the same Hikaru models can be used as high-fidelity replicas of the YAML for other processes as well.

Type checking, diffing, merging, and inspection

Hikaru supports a number of other operations on the Python objects it defines. For example, you can check the types of all attributes in a config against the defined types for each attribute, you can diff two configs to see where they aren't the same, and you can search through a config for specific values and contained objects.

API coverage

Hikaru supports all objects in the OpenAPI (Swagger) spec for the Kubernetes API v1.26, and has initial support for the methods on those objects from the same spec. Additionally, it defines some higher-level CRUD-style methods on top of these foundation methods.

Usage examples

To create Python objects from a Kubernetes YAML source, use load_full_yaml():

from hikaru import load_full_yaml  # or just 'from hikaru import *'

docs = load_full_yaml(stream=open("test.yaml", "r"))
p = docs[0]

load_full_yaml() loads every Kubernetes YAML document in a YAML file and returns a list of the resulting Hikaru objects found. You can then use the YAML property names to navigate the resulting object. If you assert that an object is of a known object type, your IDE can provide you assistance in navigation:

from hikaru.model.rel_1_16 import Pod
assert isinstance(p, Pod)
print(p.metadata.labels["lab2"])
print(p.spec.containers[0].ports[0].containerPort)
for k, v in p.metadata.labels.items():
    print(f"key:{k} value:{v}")

You can create Hikaru representations of Kubernetes objects in Python:

from hikaru.model.rel_1_16 import Pod, PodSpec, Container, ObjectMeta
x = Pod(apiVersion='v1', kind='Pod',
        metadata=ObjectMeta(name='hello-kiamol-3'),
        spec=PodSpec(
            containers=[Container(name='web', image='kiamol/ch02-hello-kiamol') ]
             )
    )

…and then render it in YAML:

from hikaru import get_yaml
print(get_yaml(x))

…which yields:

---
apiVersion: v1
kind: Pod
metadata:
  name: hello-kiamol-3
spec:
  containers:
    - name: web
      image: kiamol/ch02-hello-kiamol

If you use Hikaru to parse this back in as Python objects, you can then ask Hikaru to output Python source code that will re-create it (thus providing a migration path):

from hikaru import get_python_source, load_full_yaml
docs = load_full_yaml(path="to/the/above.yaml")
print(get_python_source(docs[0], assign_to='x', style="black"))

...which results in:

x = Pod(
    apiVersion="v1",
    kind="Pod",
    metadata=ObjectMeta(name="hello-kiamol-3"),
    spec=PodSpec(containers=[Container(name="web", image="kiamol/ch02-hello-kiamol")]),
)

...and then turn it into real Kubernetes resources using the CRUD methods:

x.create(namespace='my-namespace')

...or read an existing object back in:

p = Pod().read(name='hello-kiamol-3', namespace='my-namespace')

...or use a Hikaru object as a context manager to automatically perform updates:

with Pod().read(name='hello-kiamol-3', namespace='my-namespace') as p:
        p.metadata.labels["new-label"] = 'some-value'
        # and other changes

# when the 'with' ends, the context manager sends an update()

It is entirely possible to load YAML into Python, tailor it, and then send it back to YAML; Hikaru can round-trip YAML through Python and then back to the equivalent YAML.

The pieces of complex objects can be created separately and even stored in a standard components library module for assembly later, or returned as the value of a factory function, as opposed to using a templating system to piece text files together:

from component_lib import web_container, lb_container
from hikaru.model.rel_1_16 import Pod, ObjectMeta, PodSpec
# make an ObjectMeta instance here called "om"
p = Pod(apiVersion="v1", kind="Pod",
        metadata=om,
        spec=PodSpec(containers=[web_container, lb_container])
        )

You can also transform Hikaru objects into Python dicts:

from pprint import pprint
pprint(get_clean_dict(x))

...which yields:

{'apiVersion': 'v1',
 'kind': 'Pod',
 'metadata': {'name': 'hello-kiamol-3'},
 'spec': {'containers': [{'image': 'kiamol/ch02-hello-kiamol', 'name': 'web'}]}}

...and go back into Hikaru objects. You can also render Hikaru objects as JSON:

from hikaru import *
print(get_json(x))

...which outputs the similar:

{"apiVersion": "v1", "kind": "Pod", "metadata": {"name": "hello-kiamol-3"}, "spec": {"containers": [{"name": "web", "image": "kiamol/ch02-hello-kiamol"}]}}

Hikaru lets you go from JSON back to Hikaru objects as well.

Hikaru objects can be tested for equivalence with ‘==’, and you can also easily create deep copies of entire object structures with dup(). The latter is useful when you have a component that you want to use multiple times in a model but need slightly tweaked in each use; a shared instance can’t have different values at each use, so it’s easy to make a copy that can be customised in isolation.
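The shared-instance problem can be illustrated with a plain dataclass standing in for a Hikaru model class (Container here is a stand-in, and deepcopy plays the role of dup()):

```python
import copy
from dataclasses import dataclass

# Stand-in for a Hikaru model class; not hikaru's actual Container.
@dataclass
class Container:
    name: str
    image: str

shared = Container(name="web", image="kiamol/ch02-hello-kiamol")
tweaked = copy.deepcopy(shared)   # analogous in spirit to hikaru's dup()
tweaked.name = "web-2"            # customise the copy in isolation

print(shared.name, tweaked.name)  # the original is untouched
```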

Finally, every Hikaru object that holds other properties and objects has methods that allow you to search the entire collection of objects. This lets you find various objects of interest for review and checking against policies and conventions. For example, if we had a Pod ‘p’ that was pulled in with load_full_yaml(), we could examine all of the Container objects with:

containers = p.find_by_name("containers")
for c in containers:
    # check what you want...
    pass

Or you can get all of the ExecAction objects (the value of ‘exec’ properties) that are part of the second container’s lifecycle’s httpGet property like so:

execs = p.find_by_name("exec", following='containers.1.lifecycle.httpGet')

These queries result in a list of CatalogEntry objects, which are named tuples that provide the path to the found element. You can acquire the actual element for inspection with the object_at_path() method:

o = p.object_at_path(execs[0].path)

This makes it easy to scan for specific items in a config under automated control.
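Conceptually, following a CatalogEntry path just means walking a list of attribute names and list indices from the root object. The sketch below shows that idea with hypothetical stand-in dataclasses; it is not hikaru's internal implementation:

```python
from dataclasses import dataclass, field
from functools import reduce
from typing import List

# Stand-in classes for illustration only.
@dataclass
class Probe:
    command: List[str] = field(default_factory=list)

@dataclass
class Container:
    name: str = ""
    liveness: Probe = field(default_factory=Probe)

@dataclass
class Spec:
    containers: List[Container] = field(default_factory=list)

def object_at_path(root, path):
    # Each path element is either a list index (int) or an attribute name.
    def step(obj, key):
        return obj[key] if isinstance(key, int) else getattr(obj, key)
    return reduce(step, path, root)

spec = Spec(containers=[Container(name="web", liveness=Probe(["cat", "/tmp/ok"]))])
print(object_at_path(spec, ["containers", 0, "liveness", "command"]))
```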

Future work

With basic support for managing Kubernetes resources in place, other directions under consideration include expanded event/watch support and bindings for additional releases of Kubernetes.

About

Hikaru is Mr. Sulu’s first name, a famed fictional helmsman.


hikaru's Issues

Node from_dict fails on required field port (in DaemonEndpoint)

While trying to convert k8s Node event into hikaru Node object, I get the below error.
The node yaml contains:
daemonEndpoints:
  kubeletEndpoint:
    Port: 10250

To reproduce:
obj = hikaru.from_dict(some_node_dict, cls=Node)

Error:

obj = hikaru.from_dict(k8s_payload.obj, cls=model_class)
  File "/usr/local/lib/python3.8/site-packages/hikaru/generate.py", line 243, in from_dict
    doc = cls.from_yaml(yaml, translate=translate)
  File "/usr/local/lib/python3.8/site-packages/hikaru/meta.py", line 455, in from_yaml
    inst.process(yaml, translate=translate)
  File "/usr/local/lib/python3.8/site-packages/hikaru/meta.py", line 832, in process
    obj.process(val, translate=translate)
  File "/usr/local/lib/python3.8/site-packages/hikaru/meta.py", line 832, in process
    obj.process(val, translate=translate)
  File "/usr/local/lib/python3.8/site-packages/hikaru/meta.py", line 832, in process
    obj.process(val, translate=translate)
  File "/usr/local/lib/python3.8/site-packages/hikaru/meta.py", line 816, in process
    raise TypeError(f"{self.__class__.__name__} is missing {k8s_name}"
TypeError: DaemonEndpoint is missing port (originally port)

Full correctness checking of objects

Hi,
Do you plan at some point in the future to provide full object checking?
It would be awesome to check whether a created object fulfils all the criteria, like max 64 characters in a pod name, etc.

Something similar to kubectl apply --dry-run for validating YAML.
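One such rule is easy to state precisely: Kubernetes names used as DNS-1123 labels are limited to 63 characters of lowercase alphanumerics and '-'. The helper below is a hypothetical sketch of that single check, not part of hikaru:

```python
import re

# DNS-1123 label: lowercase alphanumerics and '-', must start and end
# with an alphanumeric, at most 63 characters. (Illustrative helper.)
DNS1123_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

def valid_label(name: str) -> bool:
    return len(name) <= 63 and bool(DNS1123_LABEL.match(name))

print(valid_label("hello-kiamol-3"))  # True
print(valid_label("Bad_Name"))        # False
```

Full `--dry-run`-style validation would need many such rules per field, which is presumably why server-side validation is attractive.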

Do you support openshift?

Do you plan to support OpenShift?
If not, please ignore or delete this issue.
If you do, then: I have converted the OpenShift swagger file to version 3 of the OpenAPI spec, but it fails the build process with an error:
Traceback (most recent call last):
  File "/usr/local/Cellar/python@3.9/3.9.4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/Cellar/python@3.9/3.9.4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/anthcp/code/hikaru-oc/hikaru/build.py", line 556, in <module>
    build_it(sys.argv[1])
  File "/Users/anthcp/code/hikaru-oc/hikaru/build.py", line 547, in build_it
    load_stable(swagger_file)
  File "/Users/anthcp/code/hikaru-oc/hikaru/build.py", line 218, in load_stable
    for k, v in d["definitions"].items():
KeyError: 'definitions'

I have attached the swagger json file...
openapi-v3.json.gz

List methods aren't assigned to classes consistently

In the course of researching support for exposing K8s 'watch' functionality, it was discovered that the 'list' operations are not consistently attached to classes as methods. For example, listNamespacedJob() is a method on the JobList class, but listJobForAllNamespaces() is a method on the Job class. The intent was that methods returning a list be attached to the list class itself, not to the class whose instances are items in the list. For the sake of consistency, new models are needed that establish these methods on the proper class, and any changes must be documented to aid anyone who has to alter code due to this method refactor.

Unpin Black version

"Black" version < 22.3.0 has issues and conflicts with "click" >= 8.1.0. Thus the version range that is pinned is causing conflicts in many of our projects.

Additionally, "Black" is a code quality tool - thus is should be specified in "extras_require" section, not the main "install_requires" - since it is not actually needed to install the package externally.

I recommend to create a "dev" section under "extras_require" and pin "black" there if needed - or unpin it alltogether.

Add a 'merge()' method to HikaruBase

HikaruBase would benefit from the addition of a 'merge()' method that would take data from another instance of the same Hikaru class and merge it into self. You should be able to select whether you want None values from the other to be put into self (full overwrite) or to just copy over non-None values.
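The proposed merge() semantics can be sketched on a plain dataclass; Meta and the merge helper below are illustrative stand-ins, not hikaru's implementation:

```python
from dataclasses import dataclass, fields
from typing import Optional

# Stand-in for a Hikaru class; illustration only.
@dataclass
class Meta:
    name: Optional[str] = None
    namespace: Optional[str] = None

def merge(dst, src, overwrite_with_none=False):
    # Copy non-None values from src into dst, or everything when a
    # full overwrite is requested.
    for f in fields(dst):
        val = getattr(src, f.name)
        if val is not None or overwrite_with_none:
            setattr(dst, f.name, val)
    return dst

a = Meta(name="x", namespace="default")
merge(a, Meta(name="y"))     # src namespace is None, so dst's is kept
print(a.name, a.namespace)
```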

PodStatus's podIP and hostIP attributes are not set on readNamespacedPod(...)

I am using hikaru.model.rel_1_26.v1

The values pod.status.podIP and pod.status.hostIP returned by pod = Pod.readNamespacedPod() or pod.read() are always None when the pod is actively running in a cluster.

I believe the issue occurs here:

hikaru/hikaru/meta.py

Lines 988 to 992 in d1a00be

for f in fields(self.__class__):
    k8s_name = f.name.strip("_")
    k8s_name = (h2kc_translate(self.__class__, k8s_name)
                if translate
                else k8s_name)

The pod status JSON from the underlying Kubernetes call has the keys pod_ip and host_ip, while the field names in PodStatus are podIP and hostIP. It looks like the code in the link tries to find the key camel_to_pep8(class_field_name) in the JSON response, but camel_to_pep8("podIP") and camel_to_pep8("hostIP") return pod_i_p and host_i_p.
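A naive camelCase-to-snake_case converter reproduces the reported behaviour; this is a sketch of the failure mode, not hikaru's actual camel_to_pep8:

```python
import re

# Naive converter: insert '_' before each uppercase letter (except at the
# start), then lowercase everything. Runs of capitals like "IP" get split
# letter by letter, which is exactly the mismatch described above.
def camel_to_pep8(name: str) -> str:
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

print(camel_to_pep8("podIP"))   # pod_i_p -- not the pod_ip the JSON uses
print(camel_to_pep8("hostIP"))  # host_i_p
```

Handling acronym runs requires treating consecutive capitals as one token (so podIP maps to pod_ip), which a simple per-letter rule cannot do.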

Support dictionary lookups in object_at_path

The following test doesn't pass because object_at_path can't traverse dictionaries. I think it is useful to support and can open a PR for this if you like.

import copy
# imports assumed from context; the issue does not show them
from hikaru.model.rel_1_26.v1 import Pod, PodSpec, Container, ResourceRequirements

def test122():
    """
    test that you can run object_at_path on the path returned by diff()
    """
    pod = Pod(
        spec=PodSpec(
            containers=[
                Container(name="a", resources=ResourceRequirements(limits={"foo": "bar"}))
            ]
        )
    )
    pod2 = copy.deepcopy(pod)
    pod2.spec.containers[0].resources.limits["foo"] = "blah"
    diff = pod.diff(pod2)
    print(diff)
    print("object at path is", pod.object_at_path(diff[0].path))

Support for Argo Workflows?

Hi There,

Based on how Hikaru is using OpenAPI/Swagger specs to generate types, would you consider supporting Argo Workflows? (Swagger 2.0 Spec).

If not, is there any way to generate my own types and use them with Hikaru?

Thanks!

obj.metadata.selfLink is always None

Example:

from hikaru.model.rel_1_16 import *
d = Deployment(metadata=ObjectMeta(name='somename', namespace='somens')).read()
print(d.metadata.selfLink)

I tested this on the latest released version of Hikaru. Let me know if you need more information to reproduce.

As a sidenote, thank you very much for the work you did on the CRUD API. We've updated most of our code to use Hikaru's CRUD methods instead of directly using the K8s Python client. We're very happy with the API.

Automatically setting the `kind` field

Right now you create a pod like this:

Pod(apiVersion='v1', kind='Pod',
        metadata=ObjectMeta(name='hello-kiamol-3'),
        spec=PodSpec(
            containers=[Container(name='web', image='kiamol/ch02-hello-kiamol') ]
             )
    )

Is there a use-case where you would want to give the kind field a value that doesn't match the name of the type itself? Can this field be automatically set?
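The behaviour the issue asks for can be sketched with a dataclass base that defaults `kind` to the class name; this is illustrative of the request, not something hikaru does automatically today:

```python
from dataclasses import dataclass

# Hypothetical base class; not hikaru's HikaruDocumentBase.
@dataclass
class DocBase:
    kind: str = ""

    def __post_init__(self):
        # Default kind to the concrete class name when not supplied.
        if not self.kind:
            self.kind = type(self).__name__

@dataclass
class Pod(DocBase):
    pass

print(Pod().kind)             # defaults to "Pod"
print(Pod(kind="Pod").kind)   # an explicit value is left alone
```

One reason to keep the field settable is registered subclasses: a MyPod subclass still needs to emit kind: Pod in its YAML.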

deserialize/serialize yaml preserving comments and formatting

test1 = hikaru.load_full_yaml(path='./hikaru_test/1.yaml')
print(hikaru.get_yaml(test1[0]))

1.yaml

---
apiVersion: v1
kind: Pod
metadata:
  name: hello-kiamol-3
spec:
  containers:
  # test_comment
    - name: web # test_comment_2
      image: kiamol/ch02-hello-kiamol

output

---
apiVersion: v1
kind: Pod
metadata: {name: hello-kiamol-3}
spec:
  containers:
  - {image: kiamol/ch02-hello-kiamol, name: web}

Is it possible to preserve comments and formatting (using ruamel.yaml's CommentedMap or something similar)? The use case here is quite straightforward: to be able to load, modify, and dump K8s objects back to their original storage (which is a YAML file in git).

Errors importing Pod from hikaru on latest version

Hi @haxsaw,
I'm having some trouble importing the latest version of hikaru. I've tested on python3.9 and python3.11, both on an M1 Mac.

On python3.11:

$  python3.11 -m pip install --upgrade hikaru
Requirement already satisfied: hikaru in /opt/homebrew/lib/python3.9/site-packages (1.3.0)
Requirement already satisfied: hikaru-model-25>=1.1.0 in /opt/homebrew/lib/python3.9/site-packages (from hikaru) (1.1.1)
Requirement already satisfied: hikaru-model-26>=1.1.0 in /opt/homebrew/lib/python3.9/site-packages (from hikaru) (1.1.1)
Requirement already satisfied: hikaru-model-27>=1.1.0 in /opt/homebrew/lib/python3.9/site-packages (from hikaru) (1.1.1)
Requirement already satisfied: hikaru-model-28>=1.1.0 in /opt/homebrew/lib/python3.9/site-packages (from hikaru) (1.1.0)
Requirement already satisfied: hikaru-codegen>=1.1.0 in /opt/homebrew/lib/python3.9/site-packages (from hikaru) (1.1.0)
Requirement already satisfied: hikaru-core>=1.1.0 in /opt/homebrew/lib/python3.9/site-packages (from hikaru-codegen>=1.1.0->hikaru) (1.1.1)
...

$ python3.11
Python 3.11.6 (main, Nov  2 2023, 04:39:43) [Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from hikaru.model.rel_1_25 import Pod, ObjectMeta, PodSpec
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name 'Pod' from 'hikaru.model.rel_1_25' (unknown location)

On python3.9:

$ python3.9 -m pip install --upgrade hikaru
Requirement already satisfied: hikaru in /opt/homebrew/lib/python3.9/site-packages (1.3.0)
Requirement already satisfied: hikaru-model-25>=1.1.0 in /opt/homebrew/lib/python3.9/site-packages (from hikaru) (1.1.1)
Requirement already satisfied: hikaru-model-26>=1.1.0 in /opt/homebrew/lib/python3.9/site-packages (from hikaru) (1.1.1)
Requirement already satisfied: hikaru-model-27>=1.1.0 in /opt/homebrew/lib/python3.9/site-packages (from hikaru) (1.1.1)
Requirement already satisfied: hikaru-model-28>=1.1.0 in /opt/homebrew/lib/python3.9/site-packages (from hikaru) (1.1.0)
Requirement already satisfied: hikaru-codegen>=1.1.0 in /opt/homebrew/lib/python3.9/site-packages (from hikaru) (1.1.0)
Requirement already satisfied: hikaru-core>=1.1.0 in /opt/homebrew/lib/python3.9/site-packages (from hikaru-codegen>=1.1.0->hikaru) (1.1.1)
...

$ python3.9
>>> from hikaru.model.rel_1_25 import Pod, ObjectMeta, PodSpec
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/homebrew/lib/python3.9/site-packages/hikaru/model/rel_1_25/__init__.py", line 33, in <module>
    from .deprecations import *
  File "/opt/homebrew/lib/python3.9/site-packages/hikaru/model/rel_1_25/deprecations.py", line 1, in <module>
    from hikaru.generate import add_deprecations_for_release
ModuleNotFoundError: No module named 'hikaru.generate'

Any ideas?

Missing fields in Pod.spec.containers[*].livenessProbe

Hikaru doesn't recognize the field Pod.spec.containers[*].livenessProbe.exec.

How to reproduce:

$ kubectl apply -f https://raw.githubusercontent.com/robusta-dev/kubernetes-demos/main/liveness_probe_fail/failing_liveness_probe.yaml

$ python3.9
>>> from kubernetes import client, config
>>> import hikaru
>>> from hikaru import *
>>> from hikaru.model.rel_1_25 import *
>>> config.load_kube_config()
>>> p = Pod().read(name='order-processor', namespace='default')
>>> print(p.spec.containers[0].livenessProbe)
Probe(exec=None, failureThreshold=1000, grpc=None, httpGet=None, initialDelaySeconds=5, periodSeconds=5, successThreshold=1, tcpSocket=None, terminationGracePeriodSeconds=None, timeoutSeconds=1)

As you can see, the exec field is missing from the liveness probe in Hikaru.

Here is the complete code to reproduce for convenience:

from kubernetes import client, config
import hikaru
from hikaru import *
from hikaru.model.rel_1_25 import *

config.load_kube_config()

p = Pod().read(name='order-processor', namespace='default')
print(p.spec.containers[0].livenessProbe)

test74 failing on master

When running basic_tests.py I get the following error. All other tests pass successfully:

test74 failed with should have gotten a ValueError, <class 'AssertionError'>

Process finished with exit code 0

Edge cases with diffs

Hey! I'm diffing two objects and getting a Type mismatch when the field is None in one object and has a value in the other. I think it would make more sense if this was a Value mismatch and not a type mismatch.

Example:

[DiffDetail(cls=<class 'hikaru.model.v1.ObjectMeta'>, attrname='deletionGracePeriodSeconds', path=['metadata', 'deletionGracePeriodSeconds'], report="Type mismatch:self.deletionGracePeriodSeconds is a <class 'int'> but other's is a <class 'NoneType'>"),

DiffDetail(cls=<class 'hikaru.model.v1.ObjectMeta'>, attrname='deletionTimestamp', path=['metadata', 'deletionTimestamp'], report="Type mismatch:self.deletionTimestamp is a <class 'str'> but other's is a <class 'NoneType'>"),

Any thoughts?
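The distinction the issue raises can be made concrete with a small classifier; this is an illustrative sketch of the proposed behaviour, not hikaru's DiffDetail logic:

```python
# Classify a pair of attribute values the way the issue suggests:
# a None on either side is a value difference, not a type difference.
def classify(a, b):
    if a is None or b is None:
        return "value mismatch"        # proposed behaviour for None
    if type(a) is not type(b):
        return "type mismatch"
    return "equal" if a == b else "value mismatch"

print(classify(30, None))   # value mismatch (today hikaru reports type mismatch)
print(classify(30, "30"))   # type mismatch
```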

Import errors

Error

On my local machine, the following fails:

from hikaru.model.rel_1_26 import ContainerStatus, Pod

with a strange error:

ImportError: cannot import name 'ContainerStatus' from 'hikaru.model.rel_1_26' (/Users/natanyellin/Library/Caches/pypoetry/virtualenvs/robusta-cli-k1xMhET--py3.9/lib/python3.9/site-packages/hikaru/model/rel_1_26/__init__.py)

Of course, ContainerStatus does in fact exist in the right place.

Investigation

If I patch site-packages/hikaru/model/rel_1_26/v1/__init__.py as follows:

try:
    from .v1 import *
except ImportError as e:  # pragma: no cover
    print("Whoops", e)
    pass

We discover the real cause:

Whoops cannot import name 'CertificatesV1Api' from 'kubernetes.client' (/Users/natanyellin/Library/Caches/pypoetry/virtualenvs/robusta-cli-k1xMhET--py3.9/lib/python3.9/site-packages/kubernetes/client/__init__.py)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name 'ContainerStatus' from 'hikaru.model.rel_1_26' (/Users/natanyellin/Library/Caches/pypoetry/virtualenvs/robusta-cli-k1xMhET--py3.9/lib/python3.9/site-packages/hikaru/model/rel_1_26/__init__.py)

hikaru is trying to import CertificatesV1Api which doesn't exist in the minimum version of kubernetes-client that hikaru requires. (All of hikaru's python dependencies are satisfied on my machine, although some libraries like kubernetes-client are running a compatible but not latest version.)

If I update my local kubernetes-client version to the latest this problem resolves itself.

Suggestions

  1. Update the minimum kubernetes-client version (not sure what's the real minimum, just know that latest works for me)
  2. Maybe don't do a blanket pass on ImportErrors? It makes troubleshooting stuff like this a little hard.

Update tests for newer version of pytest

Hikaru tests were originally written using nose but shifted to pytest due to pytest supporting nose-style tests. However, recent environment rebuilds reveal that nose support has been removed from pytest and now hikaru's tests no longer work properly unless run by an old version of pytest. The tests need to be updated to allow them to be run by current versions of pytest.

Error when trying to patch/replace an object

Thanks for the new version. It's great!

When trying to patch/replace an existing Kubernetes object (configmap/deployment, etc.), the API returns an error regarding the creationTimestamp field (see the error below).
Removing this field from the generated 'clean_dict' before saving solved it, but I'm not sure this is the correct solution.

Code sample:
dep: Deployment = Deployment.readNamespacedDeployment("my-deployment", "default").obj
dep.spec.replicas += 1
dep.patchNamespacedDeployment("my-deployment", "default")

error:
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"ConfigMap in version "v1" cannot be handled as a ConfigMap: v1.ConfigMap.Data: ObjectMeta: v1.ObjectMeta.UID: SelfLink: ResourceVersion: Namespace: Name: CreationTimestamp: unmarshalerDecoder: parsing time "2021-05-15T12:53:45" as "2006-01-02T15:04:05Z07:00": cannot parse "" as "Z07:00", error found in #10 byte of ...|T12:53:45", "name": |..., bigger context ...|data": {"creationTimestamp": "2021-05-15T12:53:45", "name": "jobs-states", "namespace": "robusta", "|...","reason":"BadRequest","code":400}

stack trace:

  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/hikaru/model/rel_1_16/v1/v1.py", line 12134, in replaceNamespacedConfigMap
    result = the_method(**all_args)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 25568, in replace_namespaced_config_map_with_http_info
    return self.api_client.call_api(
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
    response_data = self.request(
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 399, in request
    return self.rest_client.PUT(url,
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/kubernetes/client/rest.py", line 284, in PUT
    return self.request("PUT", url,
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/kubernetes/client/rest.py", line 233, in request
    raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (400)
Reason: Bad Request

support for auto-generation

I just came across Hikaru and am very interested. We're currently using cdk8s and are not happy with it, so looking for an alternative.

Our gripes with cdk8s aside: one thing it does very well is supporting crds and any kubernetes version. It does this by allowing users to "import" the spec from a cluster (https://cdk8s.io/docs/latest/cli/import/#crds). As far as I can tell, Hikaru relies on auto-generated code based on inspecting Kubernetes' openapi spec. I'm wondering if maintainers have considered opening up this code generation process so that users can "compile" their own hikaru module and thereby represent their own cluster versions and/or crds?

Hikaru functions should return a specific type

Hi Tom, my name is Tal and I work with @aantn in Robusta, where we use Hikaru as our kubernetes library.

It would be super nice if the functions exported by Hikaru will return a specific type rather than the general "Response" type.

For example, rather than writing:

from hikaru.model.rel_1_16.v1 import EventList
event_list: EventList = EventList.listNamespacedEvent("default").obj

It would be nice if one could simply write:

from hikaru.model.rel_1_16.v1 import EventList
event_list = EventList.listNamespacedEvent("default").obj

And still get the right type (EventList in this case), rather than the Optional[Any] type of Response.obj

(One possible implementation: rather than one Response type, Hikaru can maintain multiple Response[TYPE] types - for example Response[EventList], Response[Pod] etc. - and return the correct one from every function it exports)

Thanks :)
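The Response[TYPE] suggestion above can be sketched with typing.Generic; everything here is illustrative (the names mimic the issue's example, and the function body is a stand-in for the real API call):

```python
from typing import Generic, Optional, TypeVar

T = TypeVar("T")

# A generic Response[T] lets a type checker infer the concrete type of
# .obj without an explicit annotation at the call site.
class Response(Generic[T]):
    def __init__(self, obj: Optional[T]):
        self.obj: Optional[T] = obj

class EventList:        # stand-in for the hikaru model class
    pass

def listNamespacedEvent(namespace: str) -> Response[EventList]:
    return Response(EventList())   # stand-in for the real API call

event_list = listNamespacedEvent("default").obj  # inferred Optional[EventList]
print(type(event_list).__name__)
```

With this shape, the caller's `event_list: EventList = ...` annotation becomes unnecessary; the checker already knows the type.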

Black maybe better be a dev dependency

get_python_source is the only place that uses black.format_file_contents and black.FileMode, but this function is not needed for most real-world usage. It would be better to move it to another file and move black to the dev requirements, so any user who needs it in a development environment can install it separately.

CRUD-style support

Add CRUD-style methods to those generated on HikaruDocumentBase subclasses. There may be cases where CRUD-like instance and static methods should be added; we need to consider what it means to issue them on either (read() on an instance vs read() on the class, for example).

Kubernetes 409 when updating CRD with context manager

Hello! I'm having an issue when using the context manager with a CRD. I have the following code where I have got a new version of a CRD created, the code then reads the new_crd CRD and makes some updates and then calls the update() CRUD method. When I run this the update actually works, but an exception is caught when the k8s API throws a 409.

try:
    with rollback_cm(MyCRD(metadata=ObjectMeta(name=new_crd.metadata.name)).read()) as obj:
        obj.spec = new_crd.spec
        obj.metadata.annotations = new_crd.metadata.annotations
        obj.status = {"status": "updated"}
        obj.update(dry_run=dry_run_directive)
except Exception:
    # always 409
    pass

In the logs I see:

[2024-03-05 19:54:12,783] kubernetes.client.re [DEBUG   ] response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Operation cannot be fulfilled on mycrd.app.com \"my-app-0
\": the object has been modified; please apply your changes to the latest version and try again","reason":"Conflict","details":{"name":"my-app-0","group":"app.com","kind":"mycrd"},"code":409}

But like I said, the change has been applied. Is this something anyone has seen before?
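For reference, the usual remedy for a 409 Conflict (Kubernetes optimistic concurrency on resourceVersion) is to re-read the latest object and retry the update. The sketch below shows that pattern with hypothetical stand-ins; read/apply_changes/update and ConflictError are not hikaru's API:

```python
# Illustrative retry loop for optimistic-concurrency conflicts.
class ConflictError(Exception):   # hypothetical stand-in for a 409
    pass

def update_with_retry(read, apply_changes, update, attempts=3):
    for _ in range(attempts):
        obj = read()              # fetch the latest resourceVersion
        apply_changes(obj)
        try:
            return update(obj)
        except ConflictError:     # object changed since we read it; retry
            continue
    raise RuntimeError("gave up after repeated conflicts")

# Demonstration with fakes: the first attempt reads a stale version and
# conflicts; the retry reads the current version and succeeds.
reads = iter([{"version": 1}, {"version": 2}])
server_version = 2

def fake_update(obj):
    if obj["version"] != server_version:
        raise ConflictError()
    return obj

result = update_with_retry(
    read=lambda: next(reads),
    apply_changes=lambda obj: obj.update(note="changed"),
    update=fake_update,
)
print(result)
```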
