cerbos / cerbos-sdk-python

Cerbos Python SDK

Home Page: https://cerbos.dev

License: Apache License 2.0

access-control api-client authorization authz library python python3 security

cerbos-sdk-python's Introduction


Cerbos

What is Cerbos?

Cerbos is an authorization layer that evolves with your product. It enables you to define powerful, context-aware access control rules for your application resources in simple, intuitive YAML policies, managed and deployed via your GitOps infrastructure. It provides highly available APIs for evaluating policies and making dynamic access decisions in your application.

This repo has everything you need to set up a self-hosted Cerbos Policy Decision Point (PDP). Sign up for a free Cerbos Hub account to streamline your policy authoring and distribution workflow to self-hosted PDPs.

With Cerbos Hub you can:

  • Collaborate with colleagues to author and share policies in fully-interactive private playgrounds
  • Quickly and efficiently distribute policy updates to your whole PDP fleet
  • Build special policy bundles for client-side or in-browser authorization
  • Easily integrate with Cerbos in serverless and edge deployments

Key concepts, at a glance 👀

PRINCIPAL: oftentimes just the "user", but it can also represent other applications, services, bots or anything else you can think of. The "thing" that's trying to carry out an... ↙️

ACTION: a specific task. Whether to create, view, update, delete, acknowledge, approve... anything. The principal might have permission to do all actions or maybe just one or two. The actions are carried out on a... ↙️

RESOURCE: the thing you're controlling access to. It could be anything; e.g., in an expense management system: reports, receipts, card details, payment records, etc. You define resources in Cerbos by writing... ↙️

POLICIES: YAML files where you define the access rules for each resource, following a simple, structured format. They can be stored on disk, in cloud object stores, in git repos, or dynamically in supported databases, and are continually monitored by the... ↙️

CERBOS PDP: the Policy Decision Point: the stateless service where policies are executed and decisions are made. It runs as a separate process in Kubernetes (as a service or a sidecar), directly as a systemd service, or as an AWS Lambda function. Once deployed, the PDP provides two primary APIs...

  • CheckResources: "Can this principal access this resource?"
  • PlanResources: "Which of resource kind=X can this principal access?"

These APIs can be called via cURL, or in production via one of our many... ↙️

SDKs: you can see the list here. There are also a growing number of query plan adapters that convert SDK PlanResources responses into convenient query instances.

RBAC -> ABAC: If simple RBAC doesn't cut it, you can extend the decision-making by implementing attribute-based rules. Add conditions to your resource policies that are evaluated dynamically at runtime against contextual data, for much more granular control. Add conditions in derived roles to extend the RBAC roles dynamically, or use principal policies for targeted overrides for a specific user.

CERBOS HUB: A cloud-hosted control plane to streamline your Cerbos PDP deployment. Includes a comprehensive CI/CD solution for testing and distributing policy updates securely and efficiently, collaborative private playgrounds for quick prototyping and experimentation, and an exclusive Embedded PDP solution for deploying your policies to browsers and serverless/edge applications.

How Cerbos PDP works with your application:


Learn more about how Cerbos PDP and Cerbos Hub work together to solve your authorization headaches here.

Learn more

Used by

Cerbos is popular among large and small organizations:


Using Cerbos? Let us know by emailing [email protected].

Installation
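The SDK is published to PyPI under the package name cerbos (the same name used in the build logs in the issues further down this page):

```shell
# Install the Cerbos Python SDK from PyPI
pip install cerbos
```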

Examples

Resource policy

Write access rules for a resource.

---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  importDerivedRoles:
    - common_roles
  resource: "album:object"
  version: "default"
  rules:
    - actions: ['*']
      effect: EFFECT_ALLOW
      derivedRoles:
        - owner

    - actions: ['view', 'flag']
      effect: EFFECT_ALLOW
      roles:
        - user
      condition:
        match:
          expr: request.resource.attr.public == true

    - actions: ['view', 'delete']
      effect: EFFECT_ALLOW
      derivedRoles:
        - abuse_moderator

Derived roles

Dynamically assign new roles to users based on contextual data.

---
apiVersion: "api.cerbos.dev/v1"
derivedRoles:
  name: common_roles
  definitions:
    - name: owner
      parentRoles: ["user"]
      condition:
        match:
          expr: request.resource.attr.owner == request.principal.id

    - name: abuse_moderator
      parentRoles: ["moderator"]
      condition:
        match:
          expr: request.resource.attr.flagged == true

API request

cat <<EOF | curl --silent "http://localhost:3592/api/check/resources?pretty" -d @-
{
  "requestId": "test01",
  "includeMeta": true,
  "principal": {
    "id": "alicia",
    "roles": [
      "user"
    ]
  },
  "resources": [
    {
      "actions": [
        "view"
      ],
      "resource": {
        "id": "XX125",
        "kind": "album:object",
        "attr": {
          "owner": "alicia",
          "public": false,
          "flagged": false
        }
      }
    }
  ]
}
EOF
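The same request can be made from Python without the SDK; a minimal sketch using only the standard library, assuming a PDP listening on localhost:3592 (the SDK clients wrap this call, and its gRPC equivalent, with typed request and response objects):

```python
import json
import urllib.request

# The same CheckResources payload as the cURL example above.
payload = {
    "requestId": "test01",
    "includeMeta": True,
    "principal": {"id": "alicia", "roles": ["user"]},
    "resources": [
        {
            "actions": ["view"],
            "resource": {
                "id": "XX125",
                "kind": "album:object",
                "attr": {"owner": "alicia", "public": False, "flagged": False},
            },
        }
    ],
}


def check_resources(body: dict, host: str = "http://localhost:3592") -> dict:
    """POST a request body to the PDP's CheckResources endpoint."""
    req = urllib.request.Request(
        f"{host}/api/check/resources",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```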

API response

{
  "requestId": "test01",
  "results": [
    {
      "resource": {
        "id": "XX125",
        "kind": "album:object",
        "policyVersion": "default"
      },
      "actions": {
        "view": "EFFECT_ALLOW"
      },
      "meta": {
        "actions": {
          "view": {
            "matchedPolicy": "resource.album_object.vdefault"
          }
        },
        "effectiveDerivedRoles": [
          "owner"
        ]
      }
    }
  ]
}

Client SDKs

Query plan adapters

Telemetry

We collect anonymous usage data to help us improve the product. You can opt out by setting the CERBOS_NO_TELEMETRY=1 environment variable. For more information about what data we collect and other ways to opt out, see the telemetry documentation.

Join the community on Slack 💬

🔗 Links

Stargazers ⭐


๐Ÿ›ก๏ธ License

Cerbos is licensed under the Apache License 2.0 - see the LICENSE file for details.

💪 Thanks To All Contributors

Thanks a lot for spending your time helping Cerbos grow. Keep rocking 🥂


cerbos-sdk-python's People

Contributors

charithe · renovate[bot] · sambigeara


cerbos-sdk-python's Issues

Vendoring google protos causes collision

We tried to install the cerbos-sdk-python in a project that already uses google protobufs, and got this message:

  โ€ข Installing cerbos (0.10.1): Installing...
  Installing $HOME/Library/Caches/pypoetry/virtualenvs/preferences-service-oGN0U57V-py3.11/lib/python3.11/site-packages/google/api/annotations_pb2.py over existing file
  Installing $HOME/Library/Caches/pypoetry/virtualenvs/preferences-service-oGN0U57V-py3.11/lib/python3.11/site-packages/google/api/field_behavior_pb2.py over existing file
  Installing $HOME/Library/Caches/pypoetry/virtualenvs/preferences-service-oGN0U57V-py3.11/lib/python3.11/site-packages/google/api/http_pb2.py over existing file
  Installing $HOME/Library/Caches/pypoetry/virtualenvs/preferences-service-oGN0U57V-py3.11/lib/python3.11/site-packages/google/api/visibility_pb2.py over existing file
  Installing $HOME/Library/Caches/pypoetry/virtualenvs/preferences-service-oGN0U57V-py3.11/lib/python3.11/site-packages/protoc_gen_openapiv2/options/annotations_pb2.py over existing file
  Installing $HOME/Library/Caches/pypoetry/virtualenvs/preferences-service-oGN0U57V-py3.11/lib/python3.11/site-packages/protoc_gen_openapiv2/options/annotations_pb2.pyi over existing file
  Installing $HOME/Library/Caches/pypoetry/virtualenvs/preferences-service-oGN0U57V-py3.11/lib/python3.11/site-packages/protoc_gen_openapiv2/options/annotations_pb2_grpc.py over existing file
  Installing $HOME/Library/Caches/pypoetry/virtualenvs/preferences-service-oGN0U57V-py3.11/lib/python3.11/site-packages/protoc_gen_openapiv2/options/openapiv2_pb2.py over existing file
  Installing $HOME/Library/Caches/pypoetry/virtualenvs/preferences-service-oGN0U57V-py3.11/lib/python3.11/site-packages/protoc_gen_openapiv2/options/openapiv2_pb2.pyi over existing file

Would it be possible for cerbos-sdk-python to vendor the google protos inside a namespace package, so that they don't collide with the protos provided directly by Google?

Add types-protobuf to requirements

While working with the gRPC client, I noticed that PyCharm doesn't recognize the protobuf types.
This makes it very uncomfortable to use resources with schemas that require instantiating Value objects.

After some digging, I managed to solve this by installing the protobuf stubs, types-protobuf.
The developer experience of the gRPC client would be improved if types-protobuf were installed along with the SDK.

Add async support

We are using httpx as the backend which already has async support. Most of the work is in the design of the public API of the SDK to make sure it's sensible, customisable and user-friendly.

Usage of gRPC channel options is not compatible with Python 3.8

When using the gRPC Client, the optional parameter channel_options is available to pass gRPC-channel related options to the client.

As of version 0.10.4 of the Cerbos SDK, the processing of these options uses the |= operator for dicts, which was added in Python 3.9.
This makes it currently impossible to use the gRPC channel_options from a Python 3.8 codebase, and effectively breaks the intended Python 3.8 compatibility in general.

In cerbos/sdk/_sync/_grpc.py, line 126, the statement options |= channel_options should be replaced with options.update(channel_options) or something equivalent.

Also consider updating the docs to not use "grpc.service_config" in channel_options, because dict.update() overrides existing keys, and "grpc.service_config" is already used by the Cerbos SDK.
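The incompatibility is easy to demonstrate with plain dicts (the keys and values here are illustrative, not the SDK's actual defaults): dict |= dict was added in Python 3.9, while dict.update() behaves identically and also works on 3.8.

```python
# Defaults set by the SDK (illustrative values).
options = {"grpc.service_config": '{"methodConfig": []}'}
# User-supplied channel options.
channel_options = {"grpc.max_receive_message_length": 4 * 1024 * 1024}

# Python >= 3.9 only:
#   options |= channel_options
# Python 3.8-compatible equivalent (note: both forms override existing keys,
# so a user-supplied "grpc.service_config" would clobber the SDK's default):
options.update(channel_options)
```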

Extraneous Importlib dependency breaking build

Issue:

  • Struggling to install sdk on clean project.
  • Appears to be from the importlib package.

Solution:

  • Remove importlib from pyproject.toml
    • The importlib package on PyPI is an old backport of a Python 2 library.
    • importlib is part of the standard library in Python 3 (the SDK requires >=3.8), so the dependency is unnecessary and triggers errors like the one below.

Detail:

Test docker file:

FROM python:3.9.13

RUN pip install cerbos

Failed build log:

[+] Building 4.6s (5/5) FINISHED                                                
 => [internal] load build definition from Dockerfile                       0.0s
 => => transferring dockerfile: 85B                                        0.0s
 => [internal] load .dockerignore                                          0.0s
 => => transferring context: 2B                                            0.0s
 => [internal] load metadata for docker.io/library/python:3.9.13           0.6s
 => CACHED [1/2] FROM docker.io/library/python:3.9.13@sha256:51c996c8c65d  0.0s
 => ERROR [2/2] RUN pip install cerbos                                     3.9s
------                                                                          
 > [2/2] RUN pip install cerbos:                                                
#5 2.599 Collecting cerbos                                                      
#5 2.794   Downloading cerbos-0.3.0-py3-none-any.whl (11 kB)                    
#5 2.869 Collecting requests-toolbelt>=0.9.1                                    
#5 2.900   Downloading requests_toolbelt-0.9.1-py2.py3-none-any.whl (54 kB)     
#5 2.945      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.3/54.3 KB 1.3 MB/s eta 0:00:00
#5 3.020 Collecting httpx[http2]>=0.22.0
#5 3.053   Downloading httpx-0.23.0-py3-none-any.whl (84 kB)
#5 3.085      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 84.8/84.8 KB 2.7 MB/s eta 0:00:00
#5 3.174 Collecting dataclasses-json>=0.5.7
#5 3.209   Downloading dataclasses_json-0.5.7-py3-none-any.whl (25 kB)
#5 3.280 Collecting testcontainers>=3.5.3
#5 3.311   Downloading testcontainers-3.6.0-py2.py3-none-any.whl (41 kB)
#5 3.322      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 41.3/41.3 KB 4.7 MB/s eta 0:00:00
#5 3.380 Collecting importlib>=1.0.4
#5 3.418   Downloading importlib-1.0.4.zip (7.1 kB)
#5 3.434   Preparing metadata (setup.py): started
#5 3.480   Preparing metadata (setup.py): finished with status 'error'
#5 3.486   error: subprocess-exited-with-error
#5 3.486   
#5 3.486   × python setup.py egg_info did not run successfully.
#5 3.486   │ exit code: 1
#5 3.486   ╰─> [1 lines of output]
#5 3.486       ERROR: Can not execute `setup.py` since setuptools is not available in the build environment.
#5 3.486       [end of output]
#5 3.486   
#5 3.486   note: This error originates from a subprocess, and is likely not a problem with pip.
#5 3.490 error: metadata-generation-failed
#5 3.490 
#5 3.490 × Encountered error while generating package metadata.
#5 3.490 ╰─> See above for output.
#5 3.490 
#5 3.490 note: This is an issue with the package mentioned above, not pip.
#5 3.490 hint: See above for details.
#5 3.697 WARNING: You are using pip version 22.0.4; however, version 22.1.2 is available.
#5 3.697 You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
------
executor failed running [/bin/sh -c pip install cerbos]: exit code: 1

Add automatic retries

Add ability to automatically retry operations based on user-provided configuration
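A minimal sketch of what a configuration-driven retry wrapper could look like (pure standard library; the function and parameter names are illustrative, not part of the SDK's API):

```python
import time


def with_retries(fn, attempts: int = 3, base_delay: float = 0.1):
    """Call fn(), retrying on exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * 2**attempt)
```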

Add support for unix domain sockets

Add support for connecting to Cerbos through a Unix domain socket.

Adding tests for this is going to be the tricky part because we have to start a second test container with a UDS listener and direct the UDS tests to use that container.

Batch Policies[Question]

Hello Team Cerbos,

First of all, congratulations on everything you have built around Cerbos. I am currently using Cerbos in a project, and I wanted to ask whether there is any feature in the Cerbos admin API or its SDK for batch querying. In other words, suppose I have a user (principal): I want to know which resources they can access, i.e., evaluate all policies for that principal in one pass. This would help me determine what to display (and what to hide) in the frontend.

Thank you.
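This is the use case the PlanResources API described earlier is designed for: instead of checking a concrete resource instance, you ask for the conditions under which a principal may perform an action on a resource kind, and the PDP answers with a query plan (a filter expression) that the query plan adapters can translate into a database query. A hedged sketch of what such a request body could look like (the requestId and field values are hypothetical; consult the API docs for the authoritative schema):

```python
import json

# Hypothetical PlanResources request: "which album:object resources can
# alicia view?"
plan_request = {
    "requestId": "plan01",
    "action": "view",
    "principal": {"id": "alicia", "roles": ["user"]},
    "resource": {"kind": "album:object"},
}

body = json.dumps(plan_request)
```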

Add CI matrix tests against all supported Python versions

Context:

I've done that in the JS and Ruby SDKs - in fact, I define the testing matrix as

  • all supported versions of the PDP with the latest version of the language; plus
  • the latest version of the PDP with all supported versions of the language; plus
  • the current prerelease of the PDP with the latest version of the language (but failure of this one doesn't break the build).
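In GitHub Actions terms, that matrix might be sketched as follows (version lists and key names are illustrative):

```yaml
strategy:
  fail-fast: false
  matrix:
    # Latest PDP against all supported Python versions.
    python-version: ["3.8", "3.9", "3.10", "3.11"]
    pdp-version: [latest]
    include:
      # Current PDP prerelease with the latest Python; allowed to fail
      # without breaking the build (e.g. via continue-on-error).
      - python-version: "3.11"
        pdp-version: prerelease
        experimental: true
```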

AsyncCerbosClient - raise_on_error flag breaks flow

Async client event_hooks are awaited by the httpx client on responses. The response.raise_for_status() hook (installed when the raise_on_error flag is set on AsyncCerbosClient) is a plain function that returns None, so awaiting it raises an exception for every response:

File "/Users/hourone1/proj/my-server/src/core/authorization_service.py", line 61, in check
result = await cerbos.check_resources(principal, resource_list)
File "/Users/hourone1/opt/anaconda3/envs/my-server/lib/python3.8/site-packages/cerbos/sdk/_async/client.py", line 157, in check_resources
resp = await self._http.post("/api/check/resources", json=req.to_dict())
File "/Users/hourone1/opt/anaconda3/envs/my-server/lib/python3.8/site-packages/httpx/_client.py", line 1842, in post
return await self.request(
File "/Users/hourone1/opt/anaconda3/envs/my-server/lib/python3.8/site-packages/httpx/_client.py", line 1527, in request
return await self.send(request, auth=auth, follow_redirects=follow_redirects)
File "/Users/hourone1/opt/anaconda3/envs/my-server/lib/python3.8/site-packages/httpx/_client.py", line 1614, in send
response = await self._send_handling_auth(
File "/Users/hourone1/opt/anaconda3/envs/my-server/lib/python3.8/site-packages/httpx/_client.py", line 1642, in _send_handling_auth
response = await self._send_handling_redirects(
File "/Users/hourone1/opt/anaconda3/envs/my-server/lib/python3.8/site-packages/httpx/_client.py", line 1700, in _send_handling_redirects
raise exc
File "/Users/hourone1/opt/anaconda3/envs/my-server/lib/python3.8/site-packages/httpx/_client.py", line 1682, in _send_handling_redirects
await hook(response)
TypeError: object NoneType can't be used in 'await' expression
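The root cause is generic to asyncio rather than specific to httpx: registering a plain function where a coroutine is expected means the caller ends up awaiting the function's return value, None. A standalone reproduction:

```python
import asyncio


def sync_hook(response):
    """A plain (non-async) event hook; returns None."""
    pass  # stand-in for response.raise_for_status()


async def dispatch():
    # httpx's AsyncClient awaits each response hook; awaiting the result
    # of a plain function is awaiting None, which raises TypeError.
    try:
        await sync_hook("fake-response")
    except TypeError as exc:
        return str(exc)


message = asyncio.run(dispatch())
```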

ModuleNotFoundError: No module named 'google.api'

Importing the gRPC AsyncCerbosClient raises an error (as does importing anything else from cerbos.sdk.grpc.client)

Environment

λ python -VV
Fri Jul 21 17:11:34 +06 2023
Python 3.11.4 (main, Jun 20 2023, 10:06:33) [Clang 14.0.0 (clang-1400.0.29.202)]
λ pip list
Fri Jul 21 17:12:17 +06 2023
Package            Version
------------------ --------
anyio              3.7.1
cerbos             0.9.0
certifi            2023.5.7
charset-normalizer 3.2.0
dataclasses-json   0.5.13
grpcio             1.56.2
grpcio-tools       1.56.2
h11                0.14.0
h2                 4.1.0
hpack              4.0.0
httpcore           0.17.3
httpx              0.24.1
hyperframe         6.0.1
idna               3.4
marshmallow        3.20.1
mypy-extensions    1.0.0
packaging          23.1
pip                23.1.2
protobuf           4.23.4
requests           2.31.0
requests-toolbelt  1.0.0
setuptools         65.5.0
sniffio            1.3.0
tenacity           8.2.2
typing_extensions  4.7.1
typing-inspect     0.9.0
urllib3            2.0.4

Here are the minimal steps to reproduce:

λ python
Fri Jul 21 17:13:47 +06 2023
Python 3.11.4 (main, Jun 20 2023, 10:06:33) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from cerbos.sdk.grpc.client import AsyncCerbosClient
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/tworedz/workspace/trash/venv/lib/python3.11/site-packages/cerbos/sdk/grpc/client.py", line 4, in <module>
    from cerbos.sdk._async._grpc import (
  File "/Users/tworedz/workspace/trash/venv/lib/python3.11/site-packages/cerbos/sdk/_async/_grpc.py", line 16, in <module>
    from cerbos.engine.v1 import engine_pb2
  File "/Users/tworedz/workspace/trash/venv/lib/python3.11/site-packages/cerbos/engine/v1/engine_pb2.py", line 15, in <module>
    from cerbos.schema.v1 import schema_pb2 as cerbos_dot_schema_dot_v1_dot_schema__pb2
  File "/Users/tworedz/workspace/trash/venv/lib/python3.11/site-packages/cerbos/schema/v1/schema_pb2.py", line 14, in <module>
    from google.api import field_behavior_pb2 as google_dot_api_dot_field__behavior__pb2
ModuleNotFoundError: No module named 'google.api'
>>>

Connecting to localhost or UDS with gRPC client

I didn't investigate this in detail but I am having trouble connecting to localhost (with TLS) using the gRPC client.

UNKNOWN:failed to connect to all addresses; last error: UNAVAILABLE: ipv4:127.0.0.1:3593: Socket closed

With GRPC_VERBOSITY=debug and GRPC_TRACE=all:

I0724 17:13:11.215147709  142847 handshaker.cc:100]                    handshake_manager 0x55580ce31150: error=OK shutdown=0 index=0, args={endpoint=(nil), args={grpc.client_channel_factory=0x55580ce086f0, grpc.default_authority=localhost:3593, grpc.internal.channel_credentials=0x55580ce09970, grpc.internal.event_engine=0x55580ccd4610, grpc.internal.security_connector=0x55580cc8daf0, grpc.internal.subchannel_pool=0x55580cd7dd50, grpc.internal.tcp_handshaker_bind_endpoint_to_pollset=1, grpc.internal.tcp_handshaker_resolved_address=ipv4:127.0.0.1:3593, grpc.primary_user_agent=grpc-python/1.56.2, grpc.resource_quota=0x55580ccea2a0, grpc.server_uri=dns:///localhost:3593}, read_buffer=0x55580c94ae80 (length=0), exit_early=0}
I0724 17:13:11.215155726  142847 handshaker.cc:146]                    handshake_manager 0x55580ce31150: calling handshaker tcp_connect [0x55580cc098e0] at index 0
I0724 17:13:11.215263179  142847 tcp_client_posix.cc:387]              CLIENT_CONNECT: ipv4:127.0.0.1:3593: asynchronously connecting fd 0x55580cc06f30
D0724 17:13:11.215267479  142847 timer_generic.cc:341]                 TIMER 0x7f8b64001410: SET 21784 now 1784 call 0x7f8b64001448[0x7f8b737cce00]
D0724 17:13:11.215270995  142847 timer_generic.cc:377]                   .. add to shard 11 with queue_deadline_cap=2782 => is_first_timer=false
I0724 17:13:11.215275498  142847 client_channel.cc:614]                chand=0x55580ce7d820: connectivity change for subchannel wrapper 0x55580cdf6aa0 subchannel 0x55580cd71cc0; hopping into work_serializer
I0724 17:13:11.215279227  142847 client_channel.cc:638]                chand=0x55580ce7d820: processing connectivity change in work serializer for subchannel wrapper 0x55580cdf6aa0 subchannel 0x55580cd71cc0 watcher=0x55580cd286b0 state=CONNECTING status=OK
I0724 17:13:11.215284657  142847 subchannel_list.h:259]                [PickFirstSubchannelList 0x55580c8e02f0] subchannel list 0x55580cd2fa70 index 1 of 2 (subchannel 0x55580cdf6aa0): connectivity changed: old_state=IDLE, new_state=CONNECTING, status=OK, shutting_down=0, pending_watcher=0x55580cd286b0
I0724 17:13:11.215289258  142847 client_channel.cc:922]                chand=0x55580ce7d820: update: state=CONNECTING status=(OK) picker=0x55580ce48750
I0724 17:13:11.215293178  142847 client_channel.cc:2555]               chand=0x55580ce7d820 lb_call=0x7f8b4c0026d0: removing from queued picks list
I0724 17:13:11.215298223  142847 tcp_client_posix.cc:182]              CLIENT_CONNECT: ipv4:127.0.0.1:3593: on_writable: error=OK
D0724 17:13:11.215301091  142847 timer_generic.cc:442]                 TIMER 0x7f8b64001410: CANCEL pending=true
I0724 17:13:11.215305789  142847 memory_quota.cc:450]                  Adding allocator 0x7f8b640053d0
I0724 17:13:11.215312588  142847 executor.cc:294]                      EXECUTOR (default-executor) try to schedule 0x55580cc099c0 (short) to thread 0
I0724 17:13:11.215315978  142847 client_channel.cc:2599]               chand=0x55580ce7d820 lb_call=0x7f8b4c0026d0: grabbing LB mutex to get picker
I0724 17:13:11.215319105  142847 client_channel.cc:2609]               chand=0x55580ce7d820 lb_call=0x7f8b4c0026d0: performing pick with picker=0x55580ce48750
I0724 17:13:11.215322189  142847 client_channel.cc:2701]               chand=0x55580ce7d820 lb_call=0x7f8b4c0026d0: LB pick queued
I0724 17:13:11.215325181  142847 client_channel.cc:2569]               chand=0x55580ce7d820 lb_call=0x7f8b4c0026d0: adding to queued picks list
I0724 17:13:11.215329207  142847 tcp_client_posix.cc:143]              CLIENT_CONNECT: ipv4:127.0.0.1:3593: on_alarm: error=CANCELLED
I0724 17:13:11.215333105  142847 client_channel.cc:3105]               chand=0x55580ce7d820 lb_call=0x7f8b4c0026d0: cancelling queued pick: error=OK self=0x55580ce2bc00 calld->pick_canceller=0x7f8b64004a30
I0724 17:13:11.215336457  142847 executor.cc:222]                      EXECUTOR (default-executor) [0]: step (sub_depth=1)
I0724 17:13:11.215338770  142847 executor.cc:243]                      EXECUTOR (default-executor) [0]: execute
I0724 17:13:11.215341203  142847 executor.cc:122]                      EXECUTOR (default-executor) run 0x55580cc099c0
I0724 17:13:11.215349638  142847 handshaker.cc:100]                    handshake_manager 0x55580ce31150: error=OK shutdown=0 index=1, args={endpoint=0x7f8b64004bb0, args={grpc.client_channel_factory=0x55580ce086f0, grpc.default_authority=localhost:3593, grpc.internal.channel_credentials=0x55580ce09970, grpc.internal.event_engine=0x55580ccd4610, grpc.internal.security_connector=0x55580cc8daf0, grpc.internal.subchannel_pool=0x55580cd7dd50, grpc.primary_user_agent=grpc-python/1.56.2, grpc.resource_quota=0x55580ccea2a0, grpc.server_uri=dns:///localhost:3593}, read_buffer=0x55580c94ae80 (length=0), exit_early=0}
I0724 17:13:11.215356050  142847 handshaker.cc:146]                    handshake_manager 0x55580ce31150: calling handshaker http_connect [0x7f8b64002be0] at index 1
I0724 17:13:11.215361480  142847 handshaker.cc:100]                    handshake_manager 0x55580ce31150: error=OK shutdown=0 index=2, args={endpoint=0x7f8b64004bb0, args={grpc.client_channel_factory=0x55580ce086f0, grpc.default_authority=localhost:3593, grpc.internal.channel_credentials=0x55580ce09970, grpc.internal.event_engine=0x55580ccd4610, grpc.internal.security_connector=0x55580cc8daf0, grpc.internal.subchannel_pool=0x55580cd7dd50, grpc.primary_user_agent=grpc-python/1.56.2, grpc.resource_quota=0x55580ccea2a0, grpc.server_uri=dns:///localhost:3593}, read_buffer=0x55580c94ae80 (length=0), exit_early=0}
I0724 17:13:11.215367847  142847 handshaker.cc:146]                    handshake_manager 0x55580ce31150: calling handshaker security [0x55580ce7da60] at index 2
I0724 17:13:11.215372399  142847 security_context.cc:270]              grpc_auth_context_add_cstring_property(ctx=0x7f8b640011f0, name=transport_security_type, value=insecure)
I0724 17:13:11.215376435  142847 security_context.cc:249]              grpc_auth_context_add_property(ctx=0x7f8b640011f0, name=security_level, value=TSI_SECURITY_NONE, value_length=17)
I0724 17:13:11.215379680  142847 security_context.cc:209]              grpc_auth_context_find_properties_by_name(ctx=0x7f8b640011f0, name=security_level)
I0724 17:13:11.215382488  142847 security_context.cc:183]              grpc_auth_property_iterator_next(it=0x7f8b727e9a40)
I0724 17:13:11.215390293  142847 handshaker.cc:100]                    handshake_manager 0x55580ce31150: error=OK shutdown=0 index=3, args={endpoint=0x7f8b64004bb0, args={grpc.auth_context=0x7f8b640011f0, grpc.client_channel_factory=0x55580ce086f0, grpc.default_authority=localhost:3593, grpc.internal.channel_credentials=0x55580ce09970, grpc.internal.event_engine=0x55580ccd4610, grpc.internal.security_connector=0x55580cc8daf0, grpc.internal.subchannel_pool=0x55580cd7dd50, grpc.primary_user_agent=grpc-python/1.56.2, grpc.resource_quota=0x55580ccea2a0, grpc.server_uri=dns:///localhost:3593}, read_buffer=0x55580c94ae80 (length=0), exit_early=0}
I0724 17:13:11.215400773  142847 handshaker.cc:132]                    handshake_manager 0x55580ce31150: handshaking complete -- scheduling on_handshake_done with error=OK
I0724 17:13:11.215410916  142847 init.cc:149]                          grpc_init(void)
I0724 17:13:11.215413891  142847 memory_quota.cc:450]                  Adding allocator 0x7f8b640056d0
I0724 17:13:11.215448971  142847 flow_control.cc:286]                  [flowctl] UPDATE SETTING INITIAL_WINDOW_SIZE from 65535 to 4194304
I0724 17:13:11.215452345  142847 flow_control.cc:286]                  [flowctl] UPDATE SETTING MAX_FRAME_SIZE from 16384 to 4194304
I0724 17:13:11.215456374  142847 chttp2_transport.cc:918]              W:0x7f8b64012b60 CLIENT [ipv4:127.0.0.1:3593] state IDLE -> WRITING [TRANSPORT_FLOW_CONTROL]
I0724 17:13:11.215459881  142847 chttp2_transport.cc:918]              W:0x7f8b64012b60 CLIENT [ipv4:127.0.0.1:3593] state WRITING -> WRITING+MORE [INITIAL_WRITE]
D0724 17:13:11.215464544  142847 posix_engine.cc:484]                  (event_engine) PosixEventEngine:0x55580cc5c7a0 scheduling callback:{140236654986496,3}
I0724 17:13:11.215470849  142847 chttp2_transport.cc:918]              W:0x7f8b64012b60 CLIENT [ipv4:127.0.0.1:3593] state WRITING+MORE -> WRITING [begin write in current thread]
I0724 17:13:11.215473881  142847 tcp_posix.cc:1804]                    WRITE 0x7f8b64004bb0 (peer=ipv4:127.0.0.1:3593)
D0724 17:13:11.215477299  142847 tcp_posix.cc:1808]                    WRITE DATA: 50 52 49 20 2a 20 48 54 54 50 2f 32 2e 30 0d 0a 0d 0a 53 4d 0d 0a 0d 0a 'PRI * HTTP/2.0....SM....'
I0724 17:13:11.215480205  142847 tcp_posix.cc:1804]                    WRITE 0x7f8b64004bb0 (peer=ipv4:127.0.0.1:3593)
D0724 17:13:11.215483720  142847 tcp_posix.cc:1808]                    WRITE DATA: 00 00 24 04 00 00 00 00 00 00 02 00 00 00 00 00 03 00 00 00 00 00 04 00 40 00 00 00 05 00 40 00 00 00 06 00 00 40 00 fe 03 00 00 00 01 '..$.....................@.....@......@.......'
I0724 17:13:11.215487078  142847 tcp_posix.cc:1804]                    WRITE 0x7f8b64004bb0 (peer=ipv4:127.0.0.1:3593)
D0724 17:13:11.215489533  142847 tcp_posix.cc:1808]                    WRITE DATA: 00 00 04 08 00 00 00 00 00 00 3f 00 01 '..........?..'
I0724 17:13:11.215510782  142847 tcp_posix.cc:1852]                    write: OK
I0724 17:13:11.215514104  142847 chttp2_transport.cc:918]              W:0x7f8b64012b60 CLIENT [ipv4:127.0.0.1:3593] state WRITING -> IDLE [finish writing]
D0724 17:13:11.215518482  142847 posix_engine.cc:484]                  (event_engine) PosixEventEngine:0x55580cc5c7a0 scheduling callback:{140236654987264,4}
I0724 17:13:11.215523117  142847 chttp2_transport.cc:2958]             ipv4:127.0.0.1:3593: Keepalive ping cancelled. Resetting timer.
D0724 17:13:11.215526136  142847 posix_engine.cc:484]                  (event_engine) PosixEventEngine:0x55580cc5c7a0 scheduling callback:{140236654986496,5}
I0724 17:13:11.215529469  142847 tcp_posix.cc:670]                     TCP:0x7f8b64004bb0 notify_on_read
I0724 17:13:11.215532065  142847 executor.cc:222]                      EXECUTOR (default-executor) [0]: step (sub_depth=1)
I0724 17:13:11.215595082  142846 tcp_posix.cc:1086]                    TCP:0x7f8b64004bb0 got_read: OK
I0724 17:13:11.215600686  142846 tcp_posix.cc:876]                     TCP:0x7f8b64004bb0 do_read
I0724 17:13:11.215608438  142846 tcp_posix.cc:811]                     TCP:0x7f8b64004bb0 call_cb 0x7f8b64012cf0 0x7f8b73630030:0x7f8b64012b60
I0724 17:13:11.215614828  142846 tcp_posix.cc:813]                     READ 0x7f8b64004bb0 (peer=ipv4:127.0.0.1:3593) error=INTERNAL:Socket closed {target_address:"ipv4:127.0.0.1:3593", grpc_status:14, fd:5}
I0724 17:13:11.215638815  142846 chttp2_transport.cc:2980]             transport 0x7f8b64012b60 set connectivity_state=4
I0724 17:13:11.215641971  142846 connectivity_state.cc:160]            ConnectivityStateTracker client_transport[0x7f8b64012e50]: READY -> SHUTDOWN (close_transport, OK)
I0724 17:13:11.215695486  142846 subchannel.cc:707]                    subchannel 0x55580cd71cc0 {address=ipv4:127.0.0.1:3593, args={grpc.client_channel_factory=0x55580ce086f0, grpc.default_authority=localhost:3593, grpc.internal.channel_credentials=0x55580ce09970, grpc.internal.event_engine=0x55580ccd4610, grpc.internal.security_connector=0x55580cc8daf0, grpc.internal.subchannel_pool=0x55580cd7dd50, grpc.primary_user_agent=grpc-python/1.56.2, grpc.resource_quota=0x55580ccea2a0, grpc.server_uri=dns:///localhost:3593}}: connect failed (UNKNOWN:Endpoint read failed {occurred_during_write:0, created_time:"2023-07-24T17:13:11.215618718+01:00", children:[INTERNAL:Socket closed {target_address:"ipv4:127.0.0.1:3593", grpc_status:14, fd:5}]}), backing off for 1000 ms
D0724 17:13:11.215711676  142846 posix_engine.cc:484]                  (event_engine) PosixEventEngine:0x55580cc5c7a0 scheduling callback:{140236654987264,6}
I0724 17:13:11.215715096  142846 client_channel.cc:614]                chand=0x55580ce7d820: connectivity change for subchannel wrapper 0x55580cdf6aa0 subchannel 0x55580cd71cc0; hopping into work_serializer
I0724 17:13:11.215718555  142846 client_channel.cc:638]                chand=0x55580ce7d820: processing connectivity change in work serializer for subchannel wrapper 0x55580cdf6aa0 subchannel 0x55580cd71cc0 watcher=0x55580cd286b0 state=TRANSIENT_FAILURE status=UNAVAILABLE: ipv4:127.0.0.1:3593: Socket closed
I0724 17:13:11.215722909  142846 subchannel_list.h:259]                [PickFirstSubchannelList 0x55580c8e02f0] subchannel list 0x55580cd2fa70 index 1 of 2 (subchannel 0x55580cdf6aa0): connectivity changed: old_state=CONNECTING, new_state=TRANSIENT_FAILURE, status=UNAVAILABLE: ipv4:127.0.0.1:3593: Socket closed, shutting_down=0, pending_watcher=0x55580cd286b0
I0724 17:13:11.215726898  142846 pick_first.cc:412]                    Pick First 0x55580c8e02f0 subchannel list 0x55580cd2fa70 failed to connect to all subchannels
I0724 17:13:11.215729844  142846 child_policy_handler.cc:101]          [child_policy_handler 0x55580cc9bf40] started name re-resolving
I0724 17:13:11.215732563  142846 client_channel.cc:937]                chand=0x55580ce7d820: started name re-resolving
I0724 17:13:11.215735864  142846 polling_resolver.cc:245]              [polling resolver 0x55580cc08b00] in cooldown from last resolution (from 2 ms ago); will resolve again in 29998 ms
D0724 17:13:11.215739284  142846 posix_engine.cc:484]                  (event_engine) PosixEventEngine:0x55580cc5c7a0 scheduling callback:{140236654986496,7}
I0724 17:13:11.215743409  142846 client_channel.cc:922]                chand=0x55580ce7d820: update: state=TRANSIENT_FAILURE status=(UNAVAILABLE: failed to connect to all addresses; last error: UNAVAILABLE: ipv4:127.0.0.1:3593: Socket closed) picker=0x55580cca9cf0
I0724 17:13:11.215747544  142846 connectivity_state.cc:160]            ConnectivityStateTracker client_channel[0x55580ce7d918]: CONNECTING -> TRANSIENT_FAILURE (helper, UNAVAILABLE: failed to connect to all addresses; last error: UNAVAILABLE: ipv4:127.0.0.1:3593: Socket closed)
I0724 17:13:11.215751690  142846 client_channel.cc:2555]               chand=0x55580ce7d820 lb_call=0x7f8b4c0026d0: removing from queued picks list
I0724 17:13:11.215755605  142846 client_channel.cc:2599]               chand=0x55580ce7d820 lb_call=0x7f8b4c0026d0: grabbing LB mutex to get picker
I0724 17:13:11.215758534  142846 client_channel.cc:2609]               chand=0x55580ce7d820 lb_call=0x7f8b4c0026d0: performing pick with picker=0x55580cca9cf0
I0724 17:13:11.215761947  142846 client_channel.cc:2709]               chand=0x55580ce7d820 lb_call=0x7f8b4c0026d0: LB pick failed: UNAVAILABLE: failed to connect to all addresses; last error: UNAVAILABLE: ipv4:127.0.0.1:3593: Socket closed
I0724 17:13:11.215769625  142846 client_channel.cc:2638]               chand=0x55580ce7d820 lb_call=0x7f8b4c0026d0: failed to pick subchannel: error=UNKNOWN:failed to connect to all addresses; last error: UNAVAILABLE: ipv4:127.0.0.1:3593: Socket closed {grpc_status:14, created_time:"2023-07-24T17:13:11.215765125+01:00"}
I0724 17:13:11.215777313  142846 client_channel.cc:2829]               chand=0x55580ce7d820 lb_call=0x7f8b4c0026d0: failing 1 pending batches: UNKNOWN:failed to connect to all addresses; last error: UNAVAILABLE: ipv4:127.0.0.1:3593: Socket closed {grpc_status:14, created_time:"2023-07-24T17:13:11.215765125+01:00"}

It might be that we need to use grpc.local_channel_credentials (https://grpc.github.io/grpc/python/grpc.html#grpc.local_channel_credentials)?
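Following up on that guess, here is a minimal sketch (an assumption, not a verified fix) of creating a channel to a local PDP on localhost:3593 with local channel credentials instead of plaintext or TLS. Note that grpc.secure_channel is lazy, so the connection is only exercised on the first RPC or an explicit readiness check:

```python
import grpc

# Local channel credentials are intended for connections to a server on the
# same host (LOCAL_TCP for loopback, UDS for Unix domain sockets).
creds = grpc.local_channel_credentials(grpc.LocalConnectionType.LOCAL_TCP)

# Channel creation does not connect eagerly; the handshake happens on first use.
channel = grpc.secure_channel("localhost:3593", creds)

# To surface connection errors up front, wait for the channel to become ready:
#   grpc.channel_ready_future(channel).result(timeout=5)
channel.close()
```

Whether this resolves the "Socket closed" errors above depends on how the PDP endpoint is configured (plaintext vs. TLS); if the server expects plaintext, grpc.insecure_channel would be the counterpart to try.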

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

This repository currently has no open or pending branches.

Detected dependencies

github-actions
.github/workflows/pr.yaml
  • actions/checkout v4
  • actions/cache v4
.github/workflows/release.yaml
  • actions/checkout v4
  • actions/cache v4
pep621
pyproject.toml
  • dataclasses-json >=0.5.7
  • requests-toolbelt >=0.9.1
  • httpx >=0.22.0
  • anyio >=3.6.1
  • tenacity >=8.1.0
  • grpcio-tools >=1.54.2
  • types-protobuf >=4.24.0.1
  • protoc-gen-openapiv2 >=0.0.1
  • googleapis-common-protos >=1.62.0
  • testcontainers/testcontainers >=3.5.3
  • lint/black >=22.3.0
  • lint/isort >=5.10.1
  • test/pytest >=7.3.1
  • tools/unasync >=0.5.0
  • tools/setuptools >=63.2.0
  • tools/commitizen >=3.2.2
  • tools/ptpython >=3.0.23
  • tools/pyyaml >=6.0.1

  • Check this box to trigger a request for Renovate to run again on this repository
