api-client-python's Introduction

dt - Dynatrace Python API Client

dt is a Python client for the Dynatrace REST API.
It focuses on ease of use and rich type hints, making it ideal for exploring the API and writing quick scripts.

Install

$ pip install dt

Simple Demo

from dynatrace import Dynatrace
from dynatrace import TOO_MANY_REQUESTS_WAIT
from dynatrace.environment_v2.tokens_api import SCOPE_METRICS_READ, SCOPE_METRICS_INGEST
from dynatrace.configuration_v1.credential_vault import PublicCertificateCredentials
from dynatrace.environment_v2.settings import SettingsObject, SettingsObjectCreate

from datetime import datetime, timedelta

# Create a Dynatrace client
dt = Dynatrace("environment_url", "api_token")

# Create a client that handles too many requests (429)
# dt = Dynatrace("environment_url", "api_token", too_many_requests_strategy=TOO_MANY_REQUESTS_WAIT )

# Create a client that automatically retries on errors, up to 5 times, with a 1 second delay between retries
# dt = Dynatrace("environment_url", "api_token", retries=5, retry_delay_ms=1000 )

# Create a client with a custom HTTP timeout of 10 seconds
# dt = Dynatrace("environment_url", "api_token", timeout=10 )


# Get all hosts and some properties
for entity in dt.entities.list('type("HOST")', fields="properties.memoryTotal,properties.monitoringMode"):
    print(entity.entity_id, entity.display_name, entity.properties)

# Get idle CPU for all hosts
for metric in dt.metrics.query("builtin:host.cpu.idle", resolution="Inf"):
    print(metric)

# Print dimensions, timestamp and values for the AWS Billing Metric
for metric in dt.metrics.query("ext:cloud.aws.billing.estimatedChargesByRegionCurrency"):
    for data in metric.data:
        for timestamp, value in zip(data.timestamps, data.values):
            print(data.dimensions, timestamp, value)

# Get all ActiveGates
for ag in dt.activegates.list():
    print(ag)

# Get metric descriptions for all host metrics
for m in dt.metrics.list("builtin:host.*"):
    print(m)

# Delete endpoints that contain the word test
for plugin in dt.plugins.list():

    # This could also be dt.get_endpoints(plugin.id)
    for endpoint in plugin.endpoints:
        if "test" in endpoint.name:
            endpoint.delete(plugin.id)

# Prints dashboard ID, owner and number of tiles
for dashboard in dt.dashboards.list():
    full_dashboard = dashboard.get_full_dashboard()
    print(full_dashboard.id, dashboard.owner, len(full_dashboard.tiles))

# Delete API Tokens that haven't been used for more than 3 months
for token in dt.tokens.list(fields="+lastUsedDate,+scopes"):
    if token.last_used_date and token.last_used_date < datetime.now() - timedelta(days=90):
        print(f"Deleting token! {token}, last used date: {token.last_used_date}")

# Create an API Token that can read and ingest metrics
new_token = dt.tokens.create("metrics_token", scopes=[SCOPE_METRICS_READ, SCOPE_METRICS_INGEST])
print(new_token.token)

# Upload a public PEM certificate to the Credential Vault
with open("ca.pem", "r") as f:
    ca_cert = f.read()

my_cred = PublicCertificateCredentials(
    name="my_cred",
    description="my_cred description",
    scope="EXTENSION",
    owner_access_only=False,
    certificate=ca_cert,
    password="",
    credential_type="PUBLIC_CERTIFICATE",
    certificate_format="PEM"
)

r = dt.credentials.post(my_cred)
print(r.id)

# Create a new settings 2.0 object
settings_value = {
    "enabled": True,
    "summary": "DT API TEST 1",
    "queryDefinition": {
        "type": "METRIC_KEY",
        "metricKey": "netapp.ontap.node.fru.state",
        "aggregation": "AVG",
        "entityFilter": {
            "dimensionKey": "dt.entity.netapp_ontap:fru",
            "conditions": [],
        },
        "dimensionFilter": [],
    },
    "modelProperties": {
        "type": "STATIC_THRESHOLD",
        "threshold": 100.0,
        "alertOnNoData": False,
        "alertCondition": "BELOW",
        "violatingSamples": 3,
        "samples": 5,
        "dealertingSamples": 5,
    },
    "eventTemplate": {
        "title": "OnTap {dims:type} {dims:fru_id} is in Error State",
        "description": "OnTap field replaceable unit (FRU) {dims:type} with id {dims:fru_id} on node {dims:node} in cluster {dims:cluster} is in an error state.\n",
        "eventType": "RESOURCE",
        "davisMerge": True,
        "metadata": [],
    },
    "eventEntityDimensionKey": "dt.entity.netapp_ontap:fru",
}

settings_object = SettingsObjectCreate(schema_id="builtin:anomaly-detection.metric-events", value=settings_value, scope="environment")
dt.settings.create_object(validate_only=False, body=settings_object)

Implementation Progress

Environment API V2

API Level Access
Access Tokens - API tokens ✔️ dt.tokens
Access tokens - Tenant tokens ✔️ dt.tenant_tokens
ActiveGates ✔️ dt.activegates
ActiveGates - Auto-update configuration ✔️ dt.activegates_autoupdate_configuration
ActiveGates - Auto-update jobs ✔️ dt.activegates_autoupdate_jobs
Audit Logs ✔️ dt.audit_logs
Events ⚠️ dt.events_v2
Extensions 2.0 ✔️ dt.extensions_v2
Logs ⚠️ dt.logs
Metrics ✔️ dt.metrics
Monitored entities ⚠️ dt.entities
Monitored entities - Custom tags ✔️ dt.custom_tags
Network zones ⚠️ dt.network_zones
Problems ✔️ dt.problems
Security problems
Service-level objectives ✔️ dt.slos
Settings ⚠️ dt.settings

Environment API V1

API Level Access
Anonymization
Cluster time ✔️ dt.cluster_time
Cluster version
Custom devices ✔️ dt.custom_devices
Deployment ✔️ dt.deployment
Events ⚠️ dt.events
JavaScript tag management
Log monitoring - Custom devices
Log monitoring - Hosts
Log monitoring - Process groups
Maintenance window
OneAgent on a host ⚠️ dt.oneagents
Problem
Synthetic - Locations and nodes
Synthetic - Monitors ⚠️ dt.synthetic_monitors
Synthetic - Third party ✔️ dt.third_part_synthetic_tests
Threshold
Timeseries ⚠️ dt.timeseries
Tokens
Topology & Smartscape - Application
Topology & Smartscape - Custom device ⚠️ dt.custom_devices
Topology & Smartscape - Host ⚠️ dt.smartscape_hosts
Topology & Smartscape - Process
Topology & Smartscape - Process group
Topology & Smartscape - Service
User sessions

Configuration API V1

API Level Access
Alerting Profiles ⚠️ dt.alerting_profiles
Anomaly detection - Applications
Anomaly detection - AWS
Anomaly detection - Database services
Anomaly detection - Disk events
Anomaly detection - Hosts
Anomaly detection - Metric events ⚠️ dt.anomaly_detection_metric_events
Anomaly detection - Process groups ⚠️ dt.anomaly_detection_process_groups
Anomaly detection - Services
Anomaly detection - VMware
Automatically applied tags ⚠️ dt.auto_tags
AWS credentials configuration
AWS PrivateLink
Azure credentials configuration
Calculated metrics - Log monitoring
Calculated metrics - Mobile & custom applications
Calculated metrics - Services
Calculated metrics - Synthetic
Calculated metrics - Web applications
Cloud Foundry credentials configuration
Conditional naming
Credential vault
Custom tags ✔️ dt.custom_tags
Dashboards ⚠️ dt.dashboards
Data privacy and security
Extensions ✔️ dt.extensions
Frequent issue detection
Kubernetes credentials configuration
Maintenance windows ⚠️ dt.maintenance_windows
Management zones ⚠️ dt.management_zones
Notifications ⚠️ dt.notifications
OneAgent - Environment-wide configuration ✔️ dt.oneagents_config_environment
OneAgent in a host group ✔️ dt.oneagents_config_hostgroup
OneAgent on a host ✔️ dt.oneagents_config_host
Plugins ⚠️ dt.plugins
Remote environments
Reports
RUM - Allowed beacon origins for CORS
RUM - Application detection rules
RUM - Application detection rules - Host detection
RUM - Content resources
RUM - Geographic regions - custom client IP headers
RUM - Geographic regions - IP address mapping
RUM - Mobile and custom application configuration
RUM - Web application configuration
Service - Custom services
Service - Detection full web request
Service - Detection full web service
Service - Detection opaque and external web request
Service - Detection opaque and external web service
Service - Failure detection parameter sets
Service - Failure detection rules
Service - IBM MQ tracing
Service - Request attributes
Service - Request naming


api-client-python's Issues

Implementation Progress up-to-dateness

I was wondering about the freshness of the "Implementation Progress" tables in the Readme: do they reflect the current status of all API routes?
For instance, I see that Alerting Profiles (under Configuration API V1) is marked as ❌, but I also see that alerting_profiles.py implements the connection to the alertingProfiles endpoint.

Add support for OneAgent remote configuration management API

We are using this API client as part of a script that looks for hosts without the proper network zone set and then sets the correct network zone using a remote configuration management job. We will likely do the same for host metadata corrections. Right now I can use this client library to pull the entity information nicely, but I cannot create the configuration jobs. As a workaround I can make the API calls myself (see the sketch below), but it would be nice to have this added to the library.

https://docs.dynatrace.com/docs/dynatrace-api/environment-api/remote-configuration/oneagent/post-config-job
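
A minimal sketch of that workaround, calling the remote configuration endpoint directly with requests. The environment URL, token, endpoint path and payload shape below are all placeholders or assumptions based on the linked documentation and should be verified there before use.

import requests

ENV_URL = "https://example.live.dynatrace.com"  # placeholder environment URL
API_TOKEN = "dt0c01.XXXX"                       # placeholder token with the required scope

# Illustrative payload: set the network zone on one host (shape assumed from the docs)
payload = {
    "entities": ["HOST-ABC123"],
    "operations": [
        {"attribute": "networkZone", "operation": "set", "value": "my.network.zone"}
    ],
}

resp = requests.post(
    f"{ENV_URL}/api/v2/oneagents/remoteConfigurationManagement",  # assumed path, check the docs
    json=payload,
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
)
resp.raise_for_status()
print(resp.json())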

Please support mzSelector

Is your feature request related to a problem? Please describe.
I need to calculate DEM consumption based on some billing metrics. Today I have to hit the system with a set of queries, one per entity type. If the metric API's mzSelector could be used, I could omit the entitySelector, which forces me to set a type, and a single query would be enough (see the sketch below).

Describe the solution you'd like
Extend metric.py with an additional parameter.

Describe alternatives you've considered
NA

Additional context
NA
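
To illustrate the request: today each entity type needs its own query, while the desired behavior would be a single management-zone-scoped call. The metric key and the entity_selector keyword below are assumptions for illustration (verify against your version of the library); the mz_selector parameter is hypothetical and does not exist yet.

# Current situation: one query per entity type (metric key and keyword names illustrative)
for entity_type in ["HOST", "SERVICE", "APPLICATION"]:
    for metric in dt.metrics.query(
        "builtin:billing.ddu.metrics.byEntity",
        entity_selector=f'type("{entity_type}"),mzName("My Zone")',
    ):
        print(metric)

# Desired: a single query scoped only by management zone, via a hypothetical mz_selector
# parameter that would map to the API's mzSelector query parameter.
# for metric in dt.metrics.query("builtin:billing.ddu.metrics.byEntity",
#                                mz_selector='mzName("My Zone")'):
#     print(metric)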

Retrieving problem details via api-client results in an error

I'm encountering an issue when trying to retrieve a problem using the api-client. Stacktrace:

Traceback (most recent call last):
  File "/snap/pycharm-community/256/plugins/python-ce/helpers/pydev/pydevd.py", line 1483, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/snap/pycharm-community/256/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/rene/.config/JetBrains/PyCharmCE2021.2/scratches/scratch.py", line 84, in <module>
    problem = get_problem_by_id('4323246839104750503_1636589220000V2')
  File "/home/rene/.config/JetBrains/PyCharmCE2021.2/scratches/scratch.py", line 70, in get_problem_by_id
    pd_problem = dt.problems.get(pid)
  File "/devPath/venv/lib/python3.8/site-packages/dynatrace/environment_v2/problems.py", line 73, in get
    return Problem(raw_element=response)
  File "/devPath/venv/lib/python3.8/site-packages/dynatrace/dynatrace_object.py", line 32, in __init__
    self._create_from_raw_data(raw_element)
  File "/devPath/venv/lib/python3.8/site-packages/dynatrace/environment_v2/problems.py", line 161, in _create_from_raw_data
    self.evidence_details: Optional[EvidenceDetails] = EvidenceDetails(raw_element=raw_element.get("evidenceDetails"))
  File "/devPath/venv/lib/python3.8/site-packages/dynatrace/dynatrace_object.py", line 32, in __init__
    self._create_from_raw_data(raw_element)
  File "/devPath/venv/lib/python3.8/site-packages/dynatrace/environment_v2/problems.py", line 200, in _create_from_raw_data
    [EventEvidence(raw_element=e) for e in raw_details if e.get("evidenceType") == EvidenceType.EVENT.value]
  File "/devPath/venv/lib/python3.8/site-packages/dynatrace/environment_v2/problems.py", line 200, in <listcomp>
    [EventEvidence(raw_element=e) for e in raw_details if e.get("evidenceType") == EvidenceType.EVENT.value]
  File "/devPath/venv/lib/python3.8/site-packages/dynatrace/dynatrace_object.py", line 32, in __init__
    self._create_from_raw_data(raw_element)
  File "/devPath/venv/lib/python3.8/site-packages/dynatrace/environment_v2/problems.py", line 220, in _create_from_raw_data
    super()._create_from_raw_data(raw_element)
  File "/devPath/venv/lib/python3.8/site-packages/dynatrace/environment_v2/problems.py", line 215, in _create_from_raw_data
    self.grouping_entity: Optional[EntityStub] = EntityStub(raw_element=raw_element.get("groupingEntity"))
  File "/devPath/venv/lib/python3.8/site-packages/dynatrace/dynatrace_object.py", line 32, in __init__
    self._create_from_raw_data(raw_element)
  File "/devPath/venv/lib/python3.8/site-packages/dynatrace/environment_v2/monitored_entities.py", line 201, in _create_from_raw_data
    self.entity_id: EntityId = EntityId(raw_element=raw_element["entityId"])
KeyError: 'entityId'
python-BaseException

Process finished with exit code 1

Failure happens here:
https://github.com/dynatrace-oss/api-client-python/blob/main/dynatrace/environment_v2/problems.py#L161

Steps to reproduce

from dynatrace import Dynatrace

# initialize Dynatrace client
dt = Dynatrace(cfg['services']['dynatrace']['environment_url'], cfg['services']['dynatrace']['api_token'], retries=5, retry_delay_ms=1000)

def get_problem_by_id(pid: str):
    """
    Get details via problem_id
    :param pid (str) : problem_id string
    """
    pd_problem = dt.problems.get(pid)
    return pd_problem

problem = get_problem_by_id('<problem_id of existing problem>')
...

Datetime to timestamp translations may result in unintended time slips

int64_to_datetime and datetime_to_int64 from dynatrace.utils may result in unwanted time differences when gathering Dynatrace objects with timestamp details and then converting back into JSON or sending back to Dynatrace.

int64_to_datetime

  • is used everywhere when taking in a JSON that has timestamps and turning into a DynatraceObject
  • it uses the datetime.utcfromtimestamp() method, which interprets the timestamp as UTC but returns a datetime object that does not carry timezone information (it is naive, not aware).
  • this means that the next time I convert the datetime back to an int, it is interpreted in my PC's local timezone and produces an int64 value shifted by the local UTC offset.
  • https://docs.python.org/3/library/datetime.html#datetime.datetime.utcfromtimestamp

datetime_to_int64

  • is used during the to_json() translation, for example before sending a DynatraceObject as a payload to Dynatrace
  • it relies on the timestamp argument already carrying timezone information set to UTC (to match Dynatrace's expectation)
  • however, if the user only used our framework to work with these objects, the datetime is naive and this will not result in the correct int64 value
  • https://docs.python.org/3/library/datetime.html#datetime.datetime.utcfromtimestamp

I propose that UTC is kept throughout the framework to keep times synced with Dynatrace API expectations and formats.

Existing code:

from datetime import datetime
from typing import Optional


def int64_to_datetime(timestamp: Optional[int]) -> Optional[datetime]:
    if timestamp is None or not timestamp:
        return None
    return datetime.utcfromtimestamp(timestamp / 1000)


def datetime_to_int64(timestamp: Optional[datetime]) -> Optional[int]:
    if not isinstance(timestamp, datetime):
        return timestamp
    return timestamp.timestamp() * 1000

Proposed changes:

from datetime import timezone

...

def int64_to_datetime(timestamp: Optional[int]) -> Optional[datetime]:
    if timestamp is None or not timestamp:
        return None
    return datetime.fromtimestamp(timestamp / 1000, timezone.utc)


def datetime_to_int64(timestamp: Optional[datetime]) -> Optional[int]:
    if not isinstance(timestamp, datetime):
        return timestamp
    return timestamp.replace(tzinfo=timezone.utc).timestamp() * 1000
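
A quick self-contained check of the slip described above (the outcome depends on the machine's local timezone): the naive utcfromtimestamp() roundtrip drifts by the local UTC offset, while the timezone-aware variant roundtrips exactly.

from datetime import datetime, timezone

ts_ms = 1_636_589_220_000  # an arbitrary Dynatrace-style millisecond timestamp

naive = datetime.utcfromtimestamp(ts_ms / 1000)             # current behavior: naive datetime
aware = datetime.fromtimestamp(ts_ms / 1000, timezone.utc)  # proposed behavior: aware datetime

print(naive.timestamp() * 1000)  # interpreted as local time, differs from ts_ms unless the local tz is UTC
print(aware.timestamp() * 1000)  # always equals ts_ms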

Exception returned when getting notification config that references deleted alerting profile

I'll be fixing this, but I came across an issue: as part of gathering problem notifications, the client currently uses the alerting profile ID to make a call and fetch the full alerting profile configuration. If an alerting profile has been deleted, the notification still refers to its ID, but looking it up returns a 404, which causes retrieval of the whole notification to fail with an exception.

self.alerting_profile: AlertingProfile = AlertingProfile(
    raw_element=self._http_client.make_request(f"/api/config/v1/alertingProfiles/{raw_element.get('alertingProfile')}").json()
)

Exception: Error making request to https://example.com/api/config/v1/alertingProfiles/7693d108-fda9-3529-8ca3-55e9269b6097: <Response [404]>. Response: {"error":{"code":404,"message":"Could not find config 7693d108-fda9-3529-8ca3-55e9269b6097."}}

I see two ways of fixing this:

  1. Handle the exception when initializing the Notification object and set the alerting profile to None. This has the benefit of keeping the feature of automatically pulling in the profile details (when they exist) for notifications when listing them.
  2. Revert to instead just reporting the string alerting profile ID when creating the notification object and then it is up to the user to make a call and get the profile details if they want them and handling if a 404 is returned.

As of now I'm leaning towards option 1; a sketch of what that could look like follows below.
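
A minimal sketch of option 1, assuming the Notification constructor context from the snippet above; since the HTTP client surfaces the 404 as a generic Exception (per the error above), a broad except is used here purely for illustration.

try:
    self.alerting_profile: Optional[AlertingProfile] = AlertingProfile(
        raw_element=self._http_client.make_request(
            f"/api/config/v1/alertingProfiles/{raw_element.get('alertingProfile')}"
        ).json()
    )
except Exception:
    # The referenced alerting profile no longer exists (lookup returned 404); keep the
    # rest of the notification usable and fall back to None for the profile details.
    self.alerting_profile = None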

Add support for metricExpression in SLOs

Is your feature request related to a problem? Please describe.
Yes

Describe the solution you'd like
Add metricExpression support to SloService.create and to the Slo class.

Describe alternatives you've considered
There is no alternative solutions found yet

Additional context
Since metricDenominator, metricNumerator and metricRate are deprecated, a metricExpression field is now required to create a new SLO. The existing client does not pass this field in the body of the JSON request (an interim workaround sketch follows below).
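
An interim workaround sketch, creating the SLO through the REST API directly until the client supports metricExpression. The field names follow the v2 SLO API as I understand it, and the URL, token and metric expression are placeholders; verify everything against the Dynatrace API docs.

import requests

ENV_URL = "https://example.live.dynatrace.com"  # placeholder
API_TOKEN = "dt0c01.XXXX"                       # placeholder, needs the SLO write scope

body = {
    "name": "My availability SLO",
    "target": 99.0,
    "warning": 99.5,
    "timeframe": "-1w",
    "evaluationType": "AGGREGATE",
    # Illustrative expression only; use your own success/total metrics here
    "metricExpression": "(100)*(builtin:service.errors.server.successCount:splitBy())/(builtin:service.requestCount.server:splitBy())",
    "filter": 'type("SERVICE")',
    "enabled": True,
}

resp = requests.post(
    f"{ENV_URL}/api/v2/slo",
    json=body,
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
)
resp.raise_for_status()  # 201 Created on success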

Issue in Exporting logs through API

Description
I am using dt version 1.1.64 to export logs, following the guideline https://docs.dynatrace.com/docs/dynatrace-api/environment-api/log-monitoring-v2/git-export-logs
I have created a Dynatrace instance using an API token that has the 'Read logs' scope along with other scopes, and tried to read a simple log with the commands below:

logs = dt.logs.export(time_from="now-10m")
logs = dt.logs.export('loglevel="ERROR"')

Expected behavior
Dynatrace log should be exported

Screenshots
Getting the error 'HTTP ERROR 501 Not Implemented':

HTTP ERROR 501 Not Implemented
URI: /api/v2/logs/export
STATUS: 501
MESSAGE: Not Implemented
![Registered_Error](https://github.com/user-attachments/assets/b549930d-2e0c-4c86-a3c2-d5090da1538a)

Add support for startTimestamp/endTimestamp on smartscape_hosts.list()

Is your feature request related to a problem? Please describe.

I am looking to collect metrics around host unit consumption over time via the API. The smartscape_hosts.list() method returns the information I need, but it only accepts a relative timeframe for framing the request. As I am collecting/trending this data over time, I need to be able to specify the exact window of time (like 1 PM to 2 PM on a given day).

Describe the solution you'd like

I would like to see optional parameters added for startTimestamp and endTimestamp on smartscape_hosts.list(). Another option would be to add an alternate list method (like list_absolute()) that accepts these parameters versus a relative timeframe.

Describe alternatives you've considered

At this point my alternatives would be to roll my own solution for this specific data collection or attempt to apply changes to the existing client.

Additional context

I would be willing to implement/PR the changes. My only question would be, which method would be the preferred implementation (i.e. add parameters to list() or add an alternate list_absolute() or other method)?

Thank you!
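
In the meantime, the absolute window can be requested from the v1 endpoint directly. The path and parameter names below are assumptions based on the Environment API v1 docs, and the URL and token are placeholders; verify before relying on them.

from datetime import datetime, timezone
import requests

ENV_URL = "https://example.live.dynatrace.com"  # placeholder
API_TOKEN = "dt0c01.XXXX"                       # placeholder

# Exact window: 1 PM to 2 PM UTC on an arbitrary day, as millisecond timestamps
start = int(datetime(2023, 6, 1, 13, 0, tzinfo=timezone.utc).timestamp() * 1000)
end = int(datetime(2023, 6, 1, 14, 0, tzinfo=timezone.utc).timestamp() * 1000)

resp = requests.get(
    f"{ENV_URL}/api/v1/entity/infrastructure/hosts",  # assumed v1 Smartscape hosts path
    params={"startTimestamp": start, "endTimestamp": end, "includeDetails": "false"},
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
)
resp.raise_for_status()
for host in resp.json():
    print(host.get("displayName"), host.get("consumedHostUnits"))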

Validation error on boolean fields (slos.create)

Describe the bug
Dynatrace expects boolean fields in JSON notation (lowercase true or false), yet they are sent to Dynatrace in Pythonic notation (True or False) when using the slos.create method.

To Reproduce
dt.slos.create(name="abc", target=98, warning=99, timeframe="-24h", use_rate_metric=False, enabled=True [...])
Dynatrace will throw a validation error on the enabled and useRateMetric fields.

Additional context
There is a workaround for this bug: by replacing the boolean values with lowercase strings (e.g. enabled="false"), they are passed through unchanged and the request succeeds. A small illustration of the expected fix direction follows below.
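
The likely root cause is that the body is assembled via Python string formatting rather than proper JSON serialization; serializing the dict with the json module (or passing it through requests' json= argument) already produces the lowercase literals Dynatrace expects:

import json

body = {"name": "abc", "enabled": True, "useRateMetric": False}
print(json.dumps(body))
# -> {"name": "abc", "enabled": true, "useRateMetric": false}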

Support Management Zone

I saw activities in the history regarding management zone support (Configuration API). Is this work still ongoing? We would need this to create our management zones using this module.

Add monitored_entity type field

The type of most monitored entities can be determined from the start of the entityId.
However, for custom devices the additional field "type" should be used.

The EntityId class (used in relationships) already holds this field, but it would be useful in the Entity class as well (see the illustration below).
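
For reference, a tiny hypothetical helper (not part of the library) showing the prefix-based approach the first sentence refers to, and why it is not sufficient for custom devices:

# Hypothetical helper: derive the type from the entityId prefix, e.g. "HOST-ABC123" -> "HOST"
def type_from_entity_id(entity_id: str) -> str:
    return entity_id.split("-", 1)[0]

print(type_from_entity_id("HOST-ABC123DEF456"))         # HOST
print(type_from_entity_id("CUSTOM_DEVICE-1234567890"))  # CUSTOM_DEVICE - the actual device
                                                        # type still needs the "type" field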

add from/to in monitored entities query

Is your feature request related to a problem? Please describe.
I was trying to pull a list of entities and noticed the numbers didn't match the GUI; it appears this is because it has been more than 3 days since the entity was last used.

Describe the solution you'd like
to be able to do something like this:
for entity in dt.entities.list('type("APPLICATION")', fields="tags", from="-365d", to="now"):

Describe alternatives you've considered
manually checking the apps for what tags they have

JSONDecodeError when parsing response from DELETE call

Describe the bug
I'm receiving a JSONDecodeError while validating the response from a call to dt.settings.delete_object(objectid).
JSON decoding tries to parse the response body, while no content is returned on a successful DELETE call.

To Reproduce
Steps to reproduce the behavior:

# delete settings object by id
dresp = dt.settings.delete_object(so.object_id)
log.info("\tdeleted: %s %s", so.object_id, str(dresp))

Expected behavior
dresp should contain [{ "code": 204 }]

Stacktrace

Traceback (most recent call last):
  File "C:\Data\git\ISHSSD\dynatrace_tools\monaco_externalid\venv\lib\site-packages\requests\models.py", line 974, in json
    return complexjson.loads(self.text, **kwargs)
  File "C:\Apps\Python310\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Apps\Python310\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Apps\Python310\lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Data\git\ISHSSD\dynatrace_tools\monaco_externalid\bin\update_settings_metadata.py", line 129, in <module>
    main()
  File "C:\Data\git\ISHSSD\dynatrace_tools\monaco_externalid\bin\update_settings_metadata.py", line 117, in main
    dresp = dt.settings.delete_object(so.object_id)
  File "C:\Data\git\ISHSSD\dynatrace_tools\monaco_externalid\venv\lib\site-packages\dynatrace\environment_v2\settings.py", line 120, in delete_object
    ).json()
  File "C:\Data\git\ISHSSD\dynatrace_tools\monaco_externalid\venv\lib\site-packages\requests\models.py", line 978, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
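
A sketch of the defensive handling that would avoid this: the settings delete endpoint returns 204 No Content on success, so the client should only call .json() when a body is actually present.

import requests

def parse_response(resp: requests.Response):
    # Successful DELETE calls return 204 with an empty body - nothing to decode
    if resp.status_code == 204 or not resp.content:
        return None
    return resp.json()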

Add support for events ingestion

Is your feature request related to a problem? Please describe.
No

Describe the solution you'd like
Implementation of the /api/v2/events/ingest endpoint

Describe alternatives you've considered
I'm already using the library for metrics ingestion and logs ingestion, so the alternative would be to create the HTTP client from scratch only to interact with this endpoint, which is not really desirable.

Additional context
I'm developing an integration from Oracle Cloud to Dynatrace
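
Until the client covers this, an interim sketch of calling the documented /api/v2/events/ingest endpoint directly with requests; the payload field names are taken from the Events v2 ingest API as I understand it, and the URL, token and selector values are placeholders.

import requests

ENV_URL = "https://example.live.dynatrace.com"  # placeholder
API_TOKEN = "dt0c01.XXXX"                       # placeholder, token needs the events ingest scope

event = {
    "eventType": "CUSTOM_INFO",
    "title": "OCI integration heartbeat",
    "entitySelector": 'type("HOST"),entityName("my-oci-host")',  # illustrative selector
    "properties": {"source": "oracle-cloud-integration"},
}

resp = requests.post(
    f"{ENV_URL}/api/v2/events/ingest",
    json=event,
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
)
resp.raise_for_status()
print(resp.json())  # ingest result with correlation ids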
