
jenningsloy318 / redfish_exporter


Exporter to get metrics from Redfish-based hardware such as Lenovo, Dell, and Supermicro servers

License: Apache License 2.0

Makefile 2.88% Go 93.32% Shell 0.72% Dockerfile 0.92% Smarty 2.15%
xcc redfish prometheus-exporter

redfish_exporter's Introduction

redfish_exporter

A Prometheus exporter that collects metrics from Redfish-based servers such as Lenovo, Dell, and Supermicro servers.

Configuration

An example configuration:

hosts:
  10.36.48.24:
    username: admin
    password: pass
  default:
    username: admin
    password: pass
groups:
  group1:
    username: group1_user
    password: group1_pass

Note that the default entry is useful as it avoids an error condition that is discussed in this issue.

Building

To build the redfish_exporter executable, run:

make build

or build inside a CentOS 7 Docker image:

make docker-build-centos7

or build inside a CentOS 8 Docker image:

make docker-build-centos8

Alternatively, you can build a Docker image using the provided Dockerfile.
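For example (a sketch; the image name/tag redfish_exporter is illustrative, not defined by the repository):

docker build -t redfish_exporter .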

Running

  • Running directly on Linux:

    redfish_exporter --config.file=redfish_exporter.yml

    and run redfish_exporter -h for more options.

  • Running in a container:

    If you built a Docker image, you can also run the exporter in a container; just remember to mount your configuration at /etc/prometheus/redfish_exporter.yml inside the container.
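A hedged example, assuming the image was tagged redfish_exporter and that the container reads its config from /etc/prometheus/redfish_exporter.yml as noted above:

docker run -d -p 9610:9610 \
  -v $(pwd)/redfish_exporter.yml:/etc/prometheus/redfish_exporter.yml \
  redfish_exporter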

Scraping

We can get the metrics via

curl http://<redfish_exporter host>:9610/redfish?target=10.36.48.24

or by pointing your favourite browser at this URL.
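If hosts are organized under groups in the configuration, the matching credentials can be selected with the group query parameter (used again in the Prometheus configuration below); for example, assuming the group1 entry from the example configuration above:

curl "http://<redfish_exporter host>:9610/redfish?target=10.36.48.24&group=group1"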

Reloading Configuration

PUT /-/reload
POST /-/reload

The /-/reload endpoint triggers a reload of the redfish_exporter configuration. A 500 status code is returned if the reload fails.

Alternatively, a configuration reload can be triggered by sending SIGHUP to the redfish_exporter process.
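For example, assuming the exporter is listening on its default port 9610 on localhost (the second command uses SIGHUP instead):

curl -X POST http://localhost:9610/-/reload

kill -HUP $(pidof redfish_exporter)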

Prometheus Configuration

You can then set up Prometheus to scrape the target using something like this in your Prometheus configuration files:

  - job_name: 'redfish-exporter'

    # metrics_path defaults to '/metrics'
    metrics_path: /redfish

    # scheme defaults to 'http'.

    static_configs:
    - targets:
       - 10.36.48.24 ## the list of redfish targets to monitor
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: localhost:9610  ### the address of the redfish_exporter; replace localhost with the IP address of the server that redfish_exporter runs on
      # (optional) when using group config add this to have group=my_group_name
      - target_label: __param_group
        replacement: my_group_name

Note that port 9610 has been reserved for the redfish_exporter.

Supported Devices (tested)

  • Enginetech EG520R-G20 (Supermicro Firmware Revision 1.76.39)
  • Enginetech EG920A-G20 (Huawei iBMC 6.22)
  • Lenovo ThinkSystem SR850 (BMC 2.1/2.42)
  • Lenovo ThinkSystem SR650 (BMC 2.50)
  • Dell PowerEdge R440, R640, R650, R6515, C6420
  • GIGABYTE G292-Z20, G292-Z40, G482-Z54

Acknowledgement

  • gofish provides the underlying library used to interact with Redfish servers

redfish_exporter's People

Contributors

chiveturkey, dalembert, dependabot[bot], dylngg, florath, fschlich, iceman91176, jenningsloy318, mahuihuang, mjavier2k, sbates130272, smiche, xflipped


redfish_exporter's Issues

Missing support for power control metrics

Hi,

Supermicro servers export power usage in some models as part of Power Control status.

Something like this might be enough to surface the interesting metrics:

diff --git a/collector/chassis_collector.go b/collector/chassis_collector.go
index 6319cd7..b99aff9 100755
--- a/collector/chassis_collector.go
+++ b/collector/chassis_collector.go
@@ -94,6 +94,14 @@ var (
                                nil,
                        ),
                },
+               "chassis_power_average_consumed_watts": {
+                       desc: prometheus.NewDesc(
+                               prometheus.BuildFQName(namespace, ChassisSubsystem, "power_average_consumed_watts"),
+                               "power wattage watts number of chassis component",
+                               ChassisPowerVotageLabelNames,
+                               nil,
+                       ),
+               },
                "chassis_power_powersupply_state": {
                        desc: prometheus.NewDesc(
                                prometheus.BuildFQName(namespace, ChassisSubsystem, "power_powersupply_state"),
@@ -251,16 +259,22 @@ func (c *ChassisCollector) Collect(ch chan<- prometheus.Metric) {
                                wg3.Add(len(chassisPowerInfoVoltages))
                                for _, chassisPowerInfoVoltage := range chassisPowerInfoVoltages {
                                        go parseChassisPowerInfoVoltage(ch, chassisID, chassisPowerInfoVoltage, wg3)
+                               }

+                               // power control
+                               chassisPowerInfoPowerControls := chassisPowerInfo.PowerControl
+                               wg4 := &sync.WaitGroup{}
+                               wg4.Add(len(chassisPowerInfoPowerControls))
+                               for _, chassisPowerInfoPowerControl := range chassisPowerInfoPowerControls {
+                                       go parseChassisPowerInfoPowerControl(ch, chassisID, chassisPowerInfoPowerControl, wg4)
                                }

                                // powerSupply
                                chassisPowerInfoPowerSupplies := chassisPowerInfo.PowerSupplies
-                               wg4 := &sync.WaitGroup{}
-                               wg4.Add(len(chassisPowerInfoPowerSupplies))
+                               wg5 := &sync.WaitGroup{}
+                               wg5.Add(len(chassisPowerInfoPowerSupplies))
                                for _, chassisPowerInfoPowerSupply := range chassisPowerInfoPowerSupplies {
-
-                                       go parseChassisPowerInfoPowerSupply(ch, chassisID, chassisPowerInfoPowerSupply, wg4)
+                                       go parseChassisPowerInfoPowerSupply(ch, chassisID, chassisPowerInfoPowerSupply, wg5)
                                }
                        }

@@ -331,13 +345,22 @@ func parseChassisPowerInfoVoltage(ch chan<- prometheus.Metric, chassisID string,
        chassisPowerInfoVoltageID := chassisPowerInfoVoltage.MemberID
        chassisPowerInfoVoltageNameReadingVolts := chassisPowerInfoVoltage.ReadingVolts
        chassisPowerInfoVoltageState := chassisPowerInfoVoltage.Status.State
-       chassisPowerVotageLabelvalues := []string{"power_votage", chassisID, chassisPowerInfoVoltageName, chassisPowerInfoVoltageID}
+       chassisPowerVotageLabelvalues := []string{"power_voltage", chassisID, chassisPowerInfoVoltageName, chassisPowerInfoVoltageID}
        if chassisPowerInfoVoltageStateValue, ok := parseCommonStatusState(chassisPowerInfoVoltageState); ok {
                ch <- prometheus.MustNewConstMetric(chassisMetrics["chassis_power_voltage_state"].desc, prometheus.GaugeValue, chassisPowerInfoVoltageStateValue, chassisPowerVotageLabelvalues...)
        }
        ch <- prometheus.MustNewConstMetric(chassisMetrics["chassis_power_voltage_volts"].desc, prometheus.GaugeValue, float64(chassisPowerInfoVoltageNameReadingVolts), chassisPowerVotageLabelvalues...)
 }

+func parseChassisPowerInfoPowerControl(ch chan<- prometheus.Metric, chassisID string, chassisPowerInfoPowerControl redfish.PowerControl, wg *sync.WaitGroup) {
+       defer wg.Done()
+       name := chassisPowerInfoPowerControl.Name
+       id := chassisPowerInfoPowerControl.MemberID
+       pm := chassisPowerInfoPowerControl.PowerMetrics
+       chassisPowerVotageLabelvalues := []string{"power_wattage", chassisID, name, id}
+       ch <- prometheus.MustNewConstMetric(chassisMetrics["chassis_power_average_consumed_watts"].desc, prometheus.GaugeValue, float64(pm.AverageConsumedWatts), chassisPowerVotageLabelvalues...)
+}
+
 func parseChassisPowerInfoPowerSupply(ch chan<- prometheus.Metric, chassisID string, chassisPowerInfoPowerSupply redfish.PowerSupply, wg *sync.WaitGroup) {

        defer wg.Done()

My apologies for some other modifications in the patch, it's from my local fork.

redfish_exporter error when using resolve to convert hostname to ip address.

If we use a hostname in the redfish.yml configuration file, hostname-to-IP resolution works as expected, but the credentials lookup then appears to use the IP address rather than the hostname to search for the security credentials. For example:

hosts:
  asgard-ipmi:
    username: user
    password: pass

errors with the message

FATA[0000] Error getting credentialfor target 192.168.11.126 file: no credentials found for target 192.168.11.126  source="main.go:45"

Now 192.168.11.126 is the correct IP for asgard-ipmi in our network. So as explained above the name resolution works but the wrong key is used to index the credential information?
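A minimal sketch of the lookup order the reporter seems to expect, keying on the configured hostname before any DNS resolution and falling back to the documented default entry; the type and function names here are illustrative, not the exporter's actual code:

package config

import "fmt"

// HostConfig is an illustrative stand-in for the exporter's per-host
// config entry (username/password pair).
type HostConfig struct {
	Username string
	Password string
}

// credentialsFor looks up credentials by the key exactly as written in the
// config file (e.g. "asgard-ipmi"), then falls back to the "default" entry,
// instead of re-keying on the resolved IP address.
func credentialsFor(cfg map[string]HostConfig, target string) (HostConfig, error) {
	if hc, ok := cfg[target]; ok {
		return hc, nil
	}
	if hc, ok := cfg["default"]; ok {
		return hc, nil
	}
	return HostConfig{}, fmt.Errorf("no credentials found for target %s", target)
}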

got no metrics from Dell IDRAC due to duplicate devices

Hi,
like in
#57

with your latest version and iDRAC firmware version 6.00.02.00 we get no metrics, only errors:

An error has occurred while serving metrics:

14 error(s) occurred:
* [from Gatherer #2] collected metric "redfish_system_pcie_device_state" { label:<name:"hostname" value:"" > label:<name:"pcie_device" value:"LPe31002-M6-D 2-Port 16Gb Fibre Channel Adapter" > label:<name:"pcie_device_id" value:"96-0" > label:<name:"pcie_device_partnumber" value:"0RXNT1" > label:<name:"pcie_device_type" value:"MultiFunction," > label:<name:"pcie_serial_number" value:"...." > label:<name:"resource" value:"pcie_device" > gauge:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "redfish_system_pcie_device_health_state" { label:<name:"hostname" value:"" > label:<name:"pcie_device" value:"LPe31002-M6-D 2-Port 16Gb Fibre Channel Adapter" > label:<name:"pcie_device_id" value:"96-0" > label:<name:"pcie_device_partnumber" value:"0RXNT1" > label:<name:"pcie_device_type" value:"MultiFunction," > label:<name:"pcie_serial_number" value:"...." > label:<name:"resource" value:"pcie_device" > gauge:<value:1 > } was collected before with the same name and label values
...

Our devices list contains duplicates:

curl -k -u ... https://.../redfish/v1/Systems/System.Embedded.1
{
...
"Model":"PowerEdge R740",
"Name":"System",
"NetworkInterfaces":{"@odata.id":"/redfish/v1/Systems/System.Embedded.1/NetworkInterfaces"},
"Oem":{"Dell":{"@odata.type":"#DellOem.v1_3_0.DellOemResources",
"DellSystem":{"BIOSReleaseDate":"12/13/2021",
....
"@odata.type":"#DellSystem.v1_3_0.DellSystem",
"@odata.id":"/redfish/v1/Systems/System.Embedded.1/Oem/Dell/DellSystem/System.Embedded.1"}}},
"PCIeDevices":[{"@odata.id":"/redfish/v1/Systems/System.Embedded.1/PCIeDevices/0-31"},
{"@odata.id":"/redfish/v1/Systems/System.Embedded.1/PCIeDevices/96-0"},
{"@odata.id":"/redfish/v1/Systems/System.Embedded.1/PCIeDevices/96-0"},
{"@odata.id":"/redfish/v1/Systems/System.Embedded.1/PCIeDevices/94-0"},
...

Would it be possible to remove the duplicates from the device list?
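A minimal sketch (illustrative, not the exporter's actual code) of deduplicating the resource links by their @odata.id before per-device metrics are emitted, which would avoid the duplicate series above:

package collector

// dedupeByODataID keeps the first occurrence of each @odata.id link and
// drops later duplicates, so each PCIe device is collected only once.
func dedupeByODataID(links []string) []string {
	seen := make(map[string]struct{}, len(links))
	out := make([]string, 0, len(links))
	for _, l := range links {
		if _, ok := seen[l]; ok {
			continue
		}
		seen[l] = struct{}{}
		out = append(out, l)
	}
	return out
}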

Tagging new releases?

Hey there @jenningsloy318 ,

We've been using this exporter and finding it very useful. However, in order to get it working correctly we needed to build it from the latest source to pick up all bugfixes and enhancements.

Do you have plans to keep tagging releases to keep them up to date with recent patches and development?

cannot unmarshal number into Go struct field .Socket of type string operation=system.Processors()

2021/02/04 15:02:24 info collector scrape started Chassis=RootService app=redfish_exporter collector=ChassisCollector target=xxxxxx
2021/02/04 15:02:24 info no thermal data found Chassis=RootService app=redfish_exporter collector=ChassisCollector operation=chassis.Thermal() target=1xxxxx
2021/02/04 15:02:24 info no power data found Chassis=RootService app=redfish_exporter collector=ChassisCollector operation=chassis.Power() target=xxxxx
2021/02/04 15:02:25 info collector scrape started Manager=1 app=redfish_exporter collector=ManagerCollector target=xxxxx
2021/02/04 15:02:25 info collector scrape completed Manager=1 app=redfish_exporter collector=ManagerCollector target=xxxxx
2021/02/04 15:02:25 info collector scrape started System=1 app=redfish_exporter collector=SystemCollector target=xxxx
2021/02/04 15:02:25 info no network adapters data found Chassis=RootService app=redfish_exporter collector=ChassisCollector operation=chassis.NetworkAdapters() target=
2021/02/04 15:02:25 info collector scrape completed Chassis=RootService app=redfish_exporter collector=ChassisCollector target=xxxxx
2021/02/04 15:02:28 error error getting processor data from system System=1 app=redfish_exporter collector=SystemCollector error=json: cannot unmarshal number into Go struct field .Socket of type string operation=system.Processors() target=
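The underlying problem is a BMC emitting a JSON number where the schema declares a string. A sketch of one tolerant approach at the decoding layer (the real fix would belong in the gofish library; the type name here is hypothetical):

package collector

import (
	"encoding/json"
	"fmt"
)

// FlexibleString accepts either a JSON string or a JSON number and stores
// it as a string, tolerating BMCs that emit e.g. Socket as a number.
type FlexibleString string

func (f *FlexibleString) UnmarshalJSON(b []byte) error {
	var s string
	if err := json.Unmarshal(b, &s); err == nil {
		*f = FlexibleString(s)
		return nil
	}
	var n json.Number
	if err := json.Unmarshal(b, &n); err != nil {
		return fmt.Errorf("value %s is neither string nor number", b)
	}
	*f = FlexibleString(n.String())
	return nil
}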

Ethernet metrics are misspelled

Hi,

Thanks for your work.

I noticed that the metrics for system network interfaces are named etherenet and not ethernet.
I.e. this metric: redfish_system_etherenet_interface_link_enabled.

I believe a patch like this should solve it:

diff --git a/collector/system_collector.go b/collector/system_collector.go
index 47623ff..5d60e95 100755
--- a/collector/system_collector.go
+++ b/collector/system_collector.go
@@ -252,33 +252,33 @@ var (
                                nil,
                        ),
                },
-               "system_etherenet_interface_state": {
+               "system_ethernet_interface_state": {
                        desc: prometheus.NewDesc(
-                               prometheus.BuildFQName(namespace, SystemSubsystem, "etherenet_interface_state"),
+                               prometheus.BuildFQName(namespace, SystemSubsystem, "ethernet_interface_state"),
                                "system ethernet interface state,1(Enabled),2(Disabled),3(StandbyOffinline),4(StandbySpare),5(InTest),6(Starting),7(Absent),8(UnavailableOffline),9(Deferring),10(Quiesced),11(Updating)",
                                SystemEthernetInterfaceLabelNames,
                                nil,
                        ),
                },
-               "system_etherenet_interface_health_state": {
+               "system_ethernet_interface_health_state": {
                        desc: prometheus.NewDesc(
-                               prometheus.BuildFQName(namespace, SystemSubsystem, "etherenet_interface_health_state"),
+                               prometheus.BuildFQName(namespace, SystemSubsystem, "ethernet_interface_health_state"),
                                "system ethernet interface health state,1(OK),2(Warning),3(Critical)",
                                SystemEthernetInterfaceLabelNames,
                                nil,
                        ),
                },
-               "system_etherenet_interface_link_status": {
+               "system_ethernet_interface_link_status": {
                        desc: prometheus.NewDesc(
-                               prometheus.BuildFQName(namespace, SystemSubsystem, "etherenet_interface_link_status"),
+                               prometheus.BuildFQName(namespace, SystemSubsystem, "ethernet_interface_link_status"),
                                "system ethernet interface link status๏ผŒ1(LinkUp),2(NoLink),3(LinkDown)",
                                SystemEthernetInterfaceLabelNames,
                                nil,
                        ),
                },
-               "system_etherenet_interface_link_enabled": {
+               "system_ethernet_interface_link_enabled": {
                        desc: prometheus.NewDesc(
-                               prometheus.BuildFQName(namespace, SystemSubsystem, "etherenet_interface_link_enabled"),
+                               prometheus.BuildFQName(namespace, SystemSubsystem, "ethernet_interface_link_enabled"),
                                "system ethernet interface if the link is enabled",
                                SystemEthernetInterfaceLabelNames,
                                nil,
@@ -642,18 +642,18 @@ func parseEthernetInterface(ch chan<- prometheus.Metric, systemHostName string,
        ethernetInterfaceHealthState := ethernetInterface.Status.Health
        systemEthernetInterfaceLabelValues := []string{systemHostName, "ethernet_interface", ethernetInterfaceName, ethernetInterfaceID, ethernetInterfaceSpeed}
        if ethernetInterfaceStateValue, ok := parseCommonStatusState(ethernetInterfaceState); ok {
-               ch <- prometheus.MustNewConstMetric(systemMetrics["system_etherenet_interface_state"].desc, prometheus.GaugeValue, ethernetInterfaceStateValue, systemEthernetInterfaceLabelValues...)
+               ch <- prometheus.MustNewConstMetric(systemMetrics["system_ethernet_interface_state"].desc, prometheus.GaugeValue, ethernetInterfaceStateValue, systemEthernetInterfaceLabelValues...)

        }
        if ethernetInterfaceHealthStateValue, ok := parseCommonStatusHealth(ethernetInterfaceHealthState); ok {
-               ch <- prometheus.MustNewConstMetric(systemMetrics["system_etherenet_interface_health_state"].desc, prometheus.GaugeValue, ethernetInterfaceHealthStateValue, systemEthernetInterfaceLabelValues...)
+               ch <- prometheus.MustNewConstMetric(systemMetrics["system_ethernet_interface_health_state"].desc, prometheus.GaugeValue, ethernetInterfaceHealthStateValue, systemEthernetInterfaceLabelValues...)
        }
        if ethernetInterfaceLinkStatusValue, ok := parseLinkStatus(ethernetInterfaceLinkStatus); ok {

-               ch <- prometheus.MustNewConstMetric(systemMetrics["system_etherenet_interface_link_status"].desc, prometheus.GaugeValue, ethernetInterfaceLinkStatusValue, systemEthernetInterfaceLabelValues...)
+               ch <- prometheus.MustNewConstMetric(systemMetrics["system_ethernet_interface_link_status"].desc, prometheus.GaugeValue, ethernetInterfaceLinkStatusValue, systemEthernetInterfaceLabelValues...)

        }

-       ch <- prometheus.MustNewConstMetric(systemMetrics["system_etherenet_interface_link_enabled"].desc, prometheus.GaugeValue, boolToFloat64(ethernetInterfaceEnabled), systemEthernetInterfaceLabelValues...)
+       ch <- prometheus.MustNewConstMetric(systemMetrics["system_ethernet_interface_link_enabled"].desc, prometheus.GaugeValue, boolToFloat64(ethernetInterfaceEnabled), systemEthernetInterfaceLabelValues...)

 }

memory: json: cannot unmarshal object into Go struct field .Location of type string

To not clutter #4 too much, I will split the issues into separate tickets.

When querying memory metrics from redfish, I get an error INFO[0008] Errors Getting memory from computer system : json: cannot unmarshal object into Go struct field .Location of type string source="system_collector.go:386".

Redfish output for an empty slot:

{
    "AllowedSpeedsMHz": [],
    "VolatileRegionSizeLimitMiB": null,
    "MemoryDeviceType": null,
    "Id": "2",
    "MemorySubsystemControllerProductID": null,
    "Links": {
        "Chassis": {
            "@odata.id": "/redfish/v1/Chassis/1"
        }
    },
    "MemoryMedia": [],
    "PartNumber": null,
    "[email protected]": "The property is deprecated. Please use ModuleProductID instead.",
    "MemoryLocation": {
        "Channel": 2,
        "MemoryController": 0,
        "Slot": 2,
        "Socket": 1
    },
    "MemorySubsystemControllerManufacturerID": null,
    "MemoryType": null,
    "DeviceLocator": null,
    "Name": "DIMM 2",
    "Oem": {
        "Lenovo": {
            "@odata.type": "#LenovoMemory.v1_0_0.LenovoMemory",
            "FruPartNumber": null
        }
    },
    "@odata.type": "#Memory.v1_7_1.Memory",
    "RankCount": null,
    "BaseModuleType": null,
    "DeviceID": null,
    "VendorID": null,
    "Regions": [],
    "ModuleProductID": null,
    "@odata.id": "/redfish/v1/Systems/1/Memory/2",
    "OperatingSpeedMhz": null,
    "SerialNumber": null,
    "[email protected]": "The property is deprecated. Please use MemorySubsystemControllerProductID instead.",
    "CapacityMiB": null,
    "Description": "This resource is used to represent a memory for a Redfish implementation.",
    "BusWidthBits": null,
    "Manufacturer": null,
    "SubsystemDeviceID": null,
    "OperatingMemoryModes": [],
    "Status": {
        "State": "Absent",
        "Health": null
    },
    "DataWidthBits": null,
    "SecurityCapabilities": {},
    "ModuleManufacturerID": null,
    "[email protected]": "The property is deprecated. Please use MemorySubsystemControllerManufacturerID instead.",
    "SubsystemVendorID": null,
    "Location": {
        "PartLocation": {
            "LocationType": "Slot",
            "ServiceLabel": "DIMM 2",
            "LocationOrdinalValue": 1
        }
    },
    "PersistentRegionSizeLimitMiB": null,
    "@odata.etag": "\"d9b7eb2d8ff48aed4e2bece427f6a77f\"",
    "[email protected]": "The property is deprecated. Please use ModuleManufacturerID instead.",
    "VolatileSizeMiB": null,
    "FunctionClasses": []
}

Redfish output for a populated slot:

{
    "AllowedSpeedsMHz": [
        2666
    ],
    "VolatileRegionSizeLimitMiB": null,
    "MemoryDeviceType": "DDR4",
    "Id": "5",
    "MemorySubsystemControllerProductID": "0x0000",
    "Links": {
        "Chassis": {
            "@odata.id": "/redfish/v1/Chassis/1"
        }
    },
    "MemoryMedia": [
        "DRAM"
    ],
    "PartNumber": "xxxxxxxx",
    "[email protected]": "The property is deprecated. Please use ModuleProductID instead.",
    "MemoryLocation": {
        "Channel": 0,
        "MemoryController": 0,
        "Slot": 5,
        "Socket": 1
    },
    "MemorySubsystemControllerManufacturerID": "0x0000",
    "MemoryType": "DRAM",
    "DeviceLocator": "DIMM 5",
    "Name": "DIMM 5",
    "Oem": {
        "Lenovo": {
            "@odata.type": "#LenovoMemory.v1_0_0.LenovoMemory",
            "FruPartNumber": "xxxxxx"
        }
    },
    "@odata.type": "#Memory.v1_7_1.Memory",
    "RankCount": 1,
    "BaseModuleType": "RDIMM",
    "DeviceID": "DIMM_5",
    "VendorID": "SK Hynix",
    "Regions": [],
    "SecurityCapabilities": {},
    "@odata.id": "/redfish/v1/Systems/1/Memory/5",
    "OperatingSpeedMhz": 2400,
    "SerialNumber": "xxxxxxx",
    "[email protected]": "The property is deprecated. Please use MemorySubsystemControllerProductID instead.",
    "CapacityMiB": 32768,
    "Description": "This resource is used to represent a memory for a Redfish implementation.",
    "BusWidthBits": 72,
    "Manufacturer": "SK Hynix",
    "SubsystemDeviceID": "0x0000",
    "OperatingMemoryModes": [
        "Volatile"
    ],
    "Status": {
        "State": "Enabled",
        "Health": "OK"
    },
    "DataWidthBits": 64,
    "[email protected]": "The property is deprecated. Please use MemorySubsystemControllerManufacturerID instead.",
    "ModuleManufacturerID": "0xad80",
    "Location": {
        "PartLocation": {
            "LocationType": "Slot",
            "ServiceLabel": "DIMM 5",
            "LocationOrdinalValue": 4
        }
    },
    "SubsystemVendorID": "0x0000",
    "PersistentRegionSizeLimitMiB": null,
    "@odata.etag": "\"785972fc74cf58990168a1aba42935e0\"",
    "ModuleProductID": "0x0000",
    "[email protected]": "The property is deprecated. Please use ModuleManufacturerID instead.",
    "VolatileSizeMiB": 32768,
    "FunctionClasses": [
        "Volatile"
    ]
}

The problem is in the gofish library in https://github.com/stmcginnis/gofish/blob/73f4bfdc949f2942e60cc396a70c7bedd2b95a8c/redfish/memory.go#L240. This should be extended.

Mockup from DMTF page: https://redfish.dmtf.org/redfish/mockups/v1/922#

Looking at http://redfish.dmtf.org/schemas/v1/Resource.v1_9_0.json#/definitions/Location, Location is quite a big resource. I think it could be left out and MemoryLocation should be used for memory.

Exporter crashes on SuperMicro servers

Hi,
I am trying to use the exporter on a SuperMicro 1114S, but it crashes with the error below:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0xe0 pc=0x848215]

goroutine 140 [running]:
github.com/jenningsloy318/redfish_exporter/collector.(*ChassisCollector).Collect(0xc0004ce280, 0xc000092060)
        /home/I336589/git/go/src/github.com/jenningsloy318/redfish_exporter/collector/chassis_collector.go:228 +0x895
github.com/jenningsloy318/redfish_exporter/collector.(*RedfishCollector).Collect.func1(0xc0001260d0, 0xc000092060, 0xa411a0, 0xc0004ce280)
        /home/I336589/git/go/src/github.com/jenningsloy318/redfish_exporter/collector/redfish_collector.go:90 +0x67
created by github.com/jenningsloy318/redfish_exporter/collector.(*RedfishCollector).Collect
        /home/I336589/git/go/src/github.com/jenningsloy318/redfish_exporter/collector/redfish_collector.go:88 +0x1b8

I have other SuperMicros and they work fine; it is just this model that has this problem.

Exporter returns target as up when it is unreachable

I've got a few devices that are not reachable from the RedFish exporter at the moment. I got them onboarded into Prometheus whilst I wait for the FW rules to be configured to allow traffic through. However I noticed that Prometheus is reporting all of the targets as up. I looked at the logs of the RedFish exporter and saw the below:

2020/12/05 13:49:30 error error creating redfish client app=redfish_exporter error=Get "https://REDACTED/redfish/v1/": dial tcp REDACTED:443: i/o timeout target=REDACTED
2020/12/05 13:49:32 error error creating redfish client app=redfish_exporter error=Get "https://REDACTED/redfish/v1/": dial tcp REDACTED:443: i/o timeout target=REDACTED

These are two separate hosts, both of which are reporting as 'Up' in Prometheus. This is not desirable: in the event we lose one of these hosts, we cannot rely on the Prometheus up metric, as it will still say these hosts are up.
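The exporter does expose its own redfish_up gauge (visible in its UI), so one workaround is to alert on redfish_up == 0 instead of the job-level up metric, which only reflects reachability of the exporter itself. A sketch of how such a scrape-status gauge is emitted (illustrative, not the exporter's actual code):

package collector

import "github.com/prometheus/client_golang/prometheus"

// reportUp emits a 0/1 gauge for the scrape outcome, set to 0 when the
// redfish client could not be created (e.g. the i/o timeouts above).
func reportUp(ch chan<- prometheus.Metric, ok bool) {
	v := 0.0
	if ok {
		v = 1.0
	}
	ch <- prometheus.MustNewConstMetric(
		prometheus.NewDesc("redfish_up", "whether the redfish scrape succeeded", nil, nil),
		prometheus.GaugeValue, v,
	)
}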

No fan_id for redfish_chassis_fan_* metrics

I managed to get the exporter working.

I noticed it doesn't collect fan_id in my case (Lenovo SR630). Looking at the Redfish response, this is what I get on my end (XCC 3.60):

...
        "CooledBy": [
            {
                "@odata.id": "/redfish/v1/Chassis/1/Thermal#/Fans/0"
            },

According to the DMTF mockup (check the Fans section), that should be the correct output. But in your chassis sample you have an extra / before #. Is this a typo on your end or a bug in your Lenovo XCC version?

There were some big changes regarding Redfish in XCC 3.00 and 3.60:

3.60: 
  - Added the support of Redfish 1.8.0 and new properties support.
  - Added the support to report Raid health

3.00:
  - Added the Redfish support of telemetry service with metric reports and SSE.
  - Added the Redfish support of 2019.1 schema and registries.
  - Added the Redfish support of firmware update with push method and enhanced the firmware update messages.
  - Added the Redfish support to get the PSU firmware inventory.
  - Added the Redfish support of IO adapter settings with Bios schema.
  - Added the Redfish support of Enclosure "Chassis" object on blade and dense systems.

How to determine system_id?

For example, in order to get Chassis information, we need to know the system_id of each vendor's server.
As shown below.

  • HPE server is /redfish/v1/Chassis/System.Embedded.1/
  • DELL server is /redfish/v1/Chassis/1/

Could you tell me how to set the system_id in your program, or does it determine the system_id (System.Embedded.1 or 1) automatically?

Redfish_exporter container crashes after some time it scrapes data

Hello there !

I have an issue using the exporter. Sometimes, for an unknown reason, the exporter stops scraping data.
A screenshot of _scrape_duration_seconds{job="redfish"} illustrated this (image omitted).

All I know is that, sometimes, the container is still running but with very low CPU consumption, and Prometheus reports a "server misbehaving" error for each redfish target because the scrape duration has been exceeded. When I check the container logs, everything seems to be alright; the process just stops scraping at some point without returning any error.

Or the container doesn't answer anymore and can't be restarted, stopped, or killed, so I must restart the Docker daemon to delete it and can't check the container logs.

In all cases, the exporter doesn't scrape anymore...

Does somebody have an idea of what is wrong and how to fix it?

Endpoint to trigger configuration reload

Hi,

I see it is possible to trigger a configuration reload by sending a SIGHUP to the process.

I think adding an HTTP endpoint for reloading the configuration, like Prometheus's /-/reload, would also be useful.

If you agree, I could work on adding this feature myself.

PhysicalSecurity Metrics/State mixed up ?

Hi,

I might be wrong, but it seems like the physical_security_sensor* metrics are mixed up.

The chassis API returns the following

...
  "PhysicalSecurity": {
    "IntrusionSensor": "Normal",
    "IntrusionSensorNumber": 115,
    "IntrusionSensorReArm": "Manual"
  },
...

which generates the following metric

redfish_chassis_physical_security_sensor_rearm_method{chassis_id="System.Embedded.1",  intrusion_sensor="Normal", intrusion_sensor_number="115", job="probe/prometheus-playground/redfish-monitoring", resource="physical_security"}  1

In this case the metric value is taken/translated from IntrusionSensorReArm.

IMHO it would be more relevant to use the IntrusionSensor property as the metric value, because this one tells what the intrusion state is, e.g. HardwareIntrusion, Normal, TamperingDetected.

The IntrusionSensorReArm value should rather be a label.

The feature was implemented here

Originally posted by @jenningsloy318 in #16 (comment)
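A sketch of the proposed change (metric and function names are illustrative, not the exporter's actual code): map the IntrusionSensor state to the metric value, and carry the rearm method as a label instead:

package collector

import "github.com/prometheus/client_golang/prometheus"

// intrusionSensorValues maps the IntrusionSensor states named in this
// issue to gauge values.
var intrusionSensorValues = map[string]float64{
	"Normal":            1,
	"HardwareIntrusion": 2,
	"TamperingDetected": 3,
}

var intrusionDesc = prometheus.NewDesc(
	"redfish_chassis_physical_security_sensor_state",
	"chassis intrusion sensor state, 1(Normal), 2(HardwareIntrusion), 3(TamperingDetected)",
	[]string{"chassis_id", "intrusion_sensor_rearm"}, nil)

// emitIntrusionMetric uses the sensor state as the value and the rearm
// method as a label, as suggested above.
func emitIntrusionMetric(ch chan<- prometheus.Metric, chassisID, sensor, rearm string) {
	if v, ok := intrusionSensorValues[sensor]; ok {
		ch <- prometheus.MustNewConstMetric(intrusionDesc, prometheus.GaugeValue, v, chassisID, rearm)
	}
}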

redfish_exporter server dies when a non-existent IP or hostname is queried.

When running redfish_exporter, we can cause it to fatally error by querying an IP or hostname that does not exist in the redfish.yml configuration file. This is not robust: we can't expect the client to always get things right, and we can't expect IP addresses and hostnames to always be constant and always be online.

Make the application more robust and return a sane message if the client tries to scrape a target that is not in the configuration file.

vidar:~/redfish_exporter$ ./redfish_exporter --config.file=redfish-hostnames.yml --log.level=debug
INFO[0000] redfish_exporter version , build reversion , build branch , build at  on host vidar  source="main.go:69"
INFO[0000] Starting redfish_exporter                     source="main.go:76"
INFO[0000] Loaded config file                            source="config.go:42"
INFO[0000] Listening on :9610                            source="main.go:125"
INFO[0002] Scraping target 1.2.3.4                       source="main.go:40"
FATA[0002] Error getting credentialfor target 1.2.3.4 file: no credentials found for target 1.2.3.4  source="main.go:45"

Add redfish_chassis_temperature_sensor_health_state metric

Hi,

The current temperature metrics look like:

redfish_chassis_temperature_celsius{chassis_id="System.Embedded.1", instance="xxxx", job="redfish-exporter", resource="temperature",  sensor="CPU1 Temp", sensor_id="0"} 37
redfish_chassis_temperature_celsius{chassis_id="System.Embedded.1", instance="xxxx", job="redfish-exporter", resource="temperature",  sensor="CPU2 Temp", sensor_id="1"} 32
redfish_chassis_temperature_celsius{chassis_id="System.Embedded.1", instance="xxxx", job="redfish-exporter", resource="temperature",  sensor="System Board Exhaust Temp", sensor_id="4"} 30
redfish_chassis_temperature_celsius{chassis_id="System.Embedded.1", instance="xxxx", job="redfish-exporter", resource="temperature",  sensor="System Board GPU7 Temp", sensor_id="3"} 32
redfish_chassis_temperature_celsius{chassis_id="System.Embedded.1", instance="xxxx", job="redfish-exporter", resource="temperature",  sensor="System Board Inlet Temp", sensor_id="2"} 19
redfish_chassis_temperature_sensor_state{chassis_id="System.Embedded.1", instance="xxxx", job="redfish-exporter", resource="temperature",  sensor="CPU1 Temp", sensor_id="0"} 1
redfish_chassis_temperature_sensor_state{chassis_id="System.Embedded.1", instance="xxxx", job="redfish-exporter", resource="temperature",  sensor="CPU2 Temp", sensor_id="1"} 1
redfish_chassis_temperature_sensor_state{chassis_id="System.Embedded.1", instance="xxxx", job="redfish-exporter", resource="temperature",  sensor="System Board Exhaust Temp", sensor_id="4"} 1
redfish_chassis_temperature_sensor_state{chassis_id="System.Embedded.1", instance="xxxx", job="redfish-exporter", resource="temperature",  sensor="System Board GPU7 Temp", sensor_id="3"} 1
redfish_chassis_temperature_sensor_state{chassis_id="System.Embedded.1", instance="xxxx", job="redfish-exporter", resource="temperature",  sensor="System Board Inlet Temp", sensor_id="2"} 1

Note: for the test I set the Warning threshold for sensor "System Board Inlet Temp" to 17.
The only state/health metrics > 1 in this case are:

redfish_system_health_state{cluster="steyr-prod-gpu",environment="prod",instance="steyr-prod-gpu__lp05edge02008",job="redfish-exporter",node="lp05edge02008",prometheus="victoriametrics/central",resource="system",scrape_from="edge-tooling",system_id="System.Embedded.1"} 2
redfish_chassis_health{chassis_id="System.Embedded.1",cluster="steyr-prod-gpu",environment="prod",instance="steyr-prod-gpu__lp05edge02008",job="redfish-exporter",node="lp05edge02008",prometheus="victoriametrics/central",resource="chassis",scrape_from="edge-tooling"} 2

So in this case we can only get an unspecific chassis alert, or we need to define an alert on redfish_chassis_temperature_celsius using separate thresholds in the alert definition, which might not match the server configurations.

But at least for our Dell servers a Health value is also provided via:
https:///redfish/v1/Chassis/System.Embedded.1/Sensors/SystemBoardInletTemp

e.g. for

{
    "@odata.context": "/redfish/v1/$metadata#Sensor.Sensor",
    "@odata.id": "/redfish/v1/Chassis/System.Embedded.1/Sensors/SystemBoardInletTemp",
    "@odata.type": "#Sensor.v1_5_0.Sensor",
    "Name": "System Board Inlet Temp",
    "Id": "SystemBoardInletTemp",
    "Description": "Instance of Sensor Id",
    "ReadingType": "Temperature",
    "ReadingUnits": "Cel",
    "Status": {
        "Health": "Warning",
        "State": "Enabled"
    },
    "Reading": 20.0,
   ...
}

Can the redfish_exporter be extended with such a temperature health metric?
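A sketch of the requested metric (names illustrative, reusing the 1/2/3 health convention the exporter's other metrics use): emit the sensor's Status.Health alongside the existing _celsius and _sensor_state series:

package collector

import "github.com/prometheus/client_golang/prometheus"

// healthValues mirrors the 1(OK)/2(Warning)/3(Critical) convention used
// by the exporter's other health metrics.
var healthValues = map[string]float64{"OK": 1, "Warning": 2, "Critical": 3}

var tempSensorHealthDesc = prometheus.NewDesc(
	"redfish_chassis_temperature_sensor_health_state",
	"chassis temperature sensor health state, 1(OK), 2(Warning), 3(Critical)",
	[]string{"chassis_id", "sensor", "sensor_id"}, nil)

// emitTempSensorHealth would let alerts fire on a per-sensor Warning (as
// in the inlet-temperature example above) instead of the unspecific
// chassis-level health.
func emitTempSensorHealth(ch chan<- prometheus.Metric, chassisID, sensor, sensorID, health string) {
	if v, ok := healthValues[health]; ok {
		ch <- prometheus.MustNewConstMetric(tempSensorHealthDesc, prometheus.GaugeValue, v, chassisID, sensor, sensorID)
	}
}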

HP DL360 not returning disk metrics

I've been testing this exporter against some Dell r640 and HP DL360s, and it seems to be pretty good.

Unfortunately the HP DL360s don't return any drive data, since it seems to live under a different Redfish URL:

/redfish/v1/Systems/1/SmartStorage/ArrayControllers/0/DiskDrives/0

but the Dell hardware seems to be under:

/redfish/v1/Systems/System.Embedded.1/SimpleStorage/Controllers/RAID.Integrated.1-1

I can definitely see information like memory for the HPs, so it seems like it's just storage related.

What further information can I provide to figure out what is going on here?

Docker Image not working properly

I've built the Docker image with the Dockerfile and tried to start it with the argument --config.file=..., but it keeps failing:
Failure OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"--config.file=/etc/redfish_exporter/redfish_exporter.yml\": stat --config.file=/etc/redfish_exporter/redfish_exporter.yml: no such file or directory": unknown
I've bound the volume with the config file to the right place (I can find it in the interactive terminal).

When I don't pass the config, nothing happens at all (no error, no running process, nothing); the only thing that appears in the log is:
[root@990ef5db40b5 /]#

I also didn't find an entry point (an executable named ~redfish_exporter) in the image, but this could be my fault.

Has anybody else had similar problems?
redfish_exporter_dockerconfig.txt

offer help in maintaining this prometheus exporter

Hi,

We are new users of this Prometheus exporter and would like to offer help in maintaining and enhancing it if needed. We intend to make great use of this exporter, as it solves a big pain point for us.

Thanks,
Marvin

Unable to create Docker image

Hi, can you please help me create the Docker image? Please share the proper steps.

I am getting the error below:

Complete!
fatal: not a git repository (or any parent up to mount point /go/src/github.com/jenningsloy318)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
fatal: not a git repository (or any parent up to mount point /go/src/github.com/jenningsloy318)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).

Absent blades are trying to be scraped and not getting health checked

I am having an issue with absent blades still being scraped. I have a Supermicro blade chassis that is missing some blades, and when I try to scrape the chassis I get the following error messages:

2 error(s) occurred:
* [from Gather #2] collected metric "redfish_system_total_processor_count" { label:<name:"hostname" value:"" > label:<name:"resource" value:"system"  > label:<name:"resource" value:"system: > label:<name:"system_id" value:"" > gauge:<value:0 > } was collected before with the same name and label values
* [from Gather #2] collected metric "redfish_system_total_memory_size" { label:<name:"hostname" value:"" > label:<name:"resource" value:"system"  > label:<name:"resource" value:"system: > label:<name:"system_id" value:"" > gauge:<value:0 > } was collected before with the same name and label values

In looking through the code I see the system collector seems to run against both of these without checking the health of the system, which is absent. The other metrics in that area check the health of the system. Does this seem to track? If so I can submit a PR to fix this:

ch <- prometheus.MustNewConstMetric(s.metrics["system_total_processor_count"].desc, prometheus.GaugeValue, float64(systemTotalProcessorCount), systemLabelValues...)

ch <- prometheus.MustNewConstMetric(s.metrics["system_total_memory_size"].desc, prometheus.GaugeValue, float64(systemTotalMemoryAmount), systemLabelValues...)

processors: json: cannot unmarshal object into Go struct field .Metrics of type string

While running the exporter, I also get the following error: Errors Getting Processors from system: json: cannot unmarshal object into Go struct field .Metrics of type string source="system_collector.go:403"

The Processors .Metrics field in my case (Lenovo SR630) is not a string:

{
    "ProcessorArchitecture": "x86",
    "Metrics": {
        "@odata.id": "/redfish/v1/Systems/1/Processors/1/ProcessorMetrics"
    },
...

The Dell server doesn't even report a Metrics resource.

Error creating redfish client app=redfish_exporter

Hi. I am stuck.

Goal = Trying to use the redfish_exporter to load sensor data from BMC via redfish and track that sensor data in Prometheus.

Setup =

  • Prometheus service running on CentOS host with IP 10.219.111.125.
  • This is a CentOS Linux release 8.4.2105
  • That host's BMC IP is 10.219.111.126. This is where I want to send redfish commands to load up sensor data.
  • The redfish_exporter's config yml file lists 10.219.111.126 , and includes the username and password to enter the BMC.
  • I am able to point the host machine's browser to 10.219.111.125:9090 and it loads Prometheus
  • I am able to point the host machine's browser to 10.219.111.125:9610 and it shows the redfish_exporter UI
  • In that redfish_exporter UI, I can enter 10.219.111.126 and it loads up data, but I noticed it says "redfish_up 0"

Problem = I keep seeing the following error
error creating redfish client app=redfish_exporter error=Get "https://10.219.111.125/redfish/v1": proxyconnect tcp: tls: first record does not look like a TLS handshake target=10.219.111.125

Prometheus is running as a service. In /etc/prometheus/prometheus.yml I have the following section...

  - job_name: 'redfish-exporter'

    # metrics_path defaults to '/metrics'
    metrics_path: /redfish

    # scheme defaults to 'http'.

    static_configs:
    - targets:
       - 10.219.111.125
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: localhost:9610  ### the address of the redfish-exporter address
      # (optional) when using group config add this to have group=my_group_name
      # target_label: __param_group
       # replacement: my_group_name

What am I doing wrong?
Thank you for your help!

performance degradation with gofish 1.14 and added redfish_manager_log metrics

Hi,

we have seen a performance degradation of the exporter, from ca. 50 seconds to over 2 minutes, due to the introduction of the redfish_manager_log metrics and the gofish upgrade to 1.14.

Disabling the redfish_manager_log metrics gained back 30 seconds.
Is it possible to make the log metrics optional, as these are also quite numerous?

The main degradation is probably mainly due to the concurrency changes and the default concurrency being 1; see stmcginnis/gofish#261.

We have tested increasing the concurrency in
https://github.com/jenningsloy318/redfish_exporter/blob/master/collector/redfish_collector.go#L108

config := gofish.ClientConfig{
	Endpoint:              url,
	Username:              username,
	Password:              password,
	Insecure:              true,
	MaxConcurrentRequests: 15,
}

Note: this needs the latest gofish release, 1.15.

With this we are back to the old runtimes.
We are still testing with concurrent calls of the redfish exporter.

Kind Regards,
Ulrike

metrics missing due to numbers with decimal place

I'm seeing the following error message for some RAID Storage devices:

2023/02/28 09:32:35 error error getting storage data from system System=System.Embedded.1 app=redfish_exporter collector=SystemCollector error=failed to retrieve some items: [{"link":"/redfish/v1/Systems/System.Embedded.1/Storage/RAID.Slot.6-1","error":"json: cannot unmarshal number 12.0 into Go struct field .StorageControllers of type int"}] operation=system.Storage()

Is it possible to change the expected type to decimal?

collected metric collected before with the same name and label values

I have been looking for a way to get metrics from HPE servers into Prometheus.

I had hoped that redfish_exporter might just work "out of the box", but unfortunately when I try to initiate a "scrape", it errors:

An error has occurred while serving metrics:

4 error(s) occurred:
* [from Gatherer #2] collected metric "redfish_chassis_power_powersupply_state" { label:<name:"chassis_id" value:"1" > label:<name:"power_supply" value:"HpServerPowerSupply" > label:<name:"power_supply_id" value:"" > label:<name:"resource" value:"power_supply" > gauge:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "redfish_chassis_power_powersupply_health" { label:<name:"chassis_id" value:"1" > label:<name:"power_supply" value:"HpServerPowerSupply" > label:<name:"power_supply_id" value:"" > label:<name:"resource" value:"power_supply" > gauge:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "redfish_chassis_power_powersupply_last_power_output_watts" { label:<name:"chassis_id" value:"1" > label:<name:"power_supply" value:"HpServerPowerSupply" > label:<name:"power_supply_id" value:"" > label:<name:"resource" value:"power_supply" > gauge:<value:258 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "redfish_chassis_power_powersupply_power_capacity_watts" { label:<name:"chassis_id" value:"1" > label:<name:"power_supply" value:"HpServerPowerSupply" > label:<name:"power_supply_id" value:"" > label:<name:"resource" value:"power_supply" > gauge:<value:460 > } was collected before with the same name and label values

Any idea on what it might take to make this work?

If it helps, when I manually query /redfish/v1/Chassis/1/Power/, this is what's returned (after being "pretty-printed" with jq so it's not just one long line):

{
  "@odata.context": "/redfish/v1/$metadata#Chassis/Members/1/Power$entity",
  "@odata.id": "/redfish/v1/Chassis/1/Power/",
  "@odata.type": "#Power.1.0.1.Power",
  "Id": "Power",
  "Name": "PowerMetrics",
  "Oem": {
    "Hp": {
      "@odata.type": "#HpPowerMetricsExt.1.2.0.HpPowerMetricsExt",
      "SNMPPowerThresholdAlert": {
        "DurationInMin": 0,
        "ThresholdWatts": 0,
        "Trigger": "Disabled"
      },
      "Type": "HpPowerMetricsExt.1.2.0",
      "links": {
        "FastPowerMeter": {
          "href": "/redfish/v1/Chassis/1/Power/FastPowerMeter/"
        },
        "FederatedGroupCapping": {
          "href": "/redfish/v1/Chassis/1/Power/FederatedGroupCapping/"
        },
        "PowerMeter": {
          "href": "/redfish/v1/Chassis/1/Power/PowerMeter/"
        }
      }
    }
  },
  "PowerCapacityWatts": 920,
  "PowerConsumedWatts": 268,
  "PowerControl": [
    {
      "PowerCapacityWatts": 920,
      "PowerConsumedWatts": 268,
      "PowerLimit": {
        "LimitInWatts": null
      },
      "PowerMetrics": {
        "AverageConsumedWatts": 265,
        "IntervalInMin": 20,
        "MaxConsumedWatts": 331,
        "MinConsumedWatts": 263
      }
    }
  ],
  "PowerLimit": {
    "LimitInWatts": null
  },
  "PowerMetrics": {
    "AverageConsumedWatts": 265,
    "IntervalInMin": 20,
    "MaxConsumedWatts": 331,
    "MinConsumedWatts": 263
  },
  "PowerSupplies": [
    {
      "FirmwareVersion": "1.00",
      "LastPowerOutputWatts": 263,
      "LineInputVoltage": 121,
      "LineInputVoltageType": "ACMidLine",
      "Model": "656362-B21",
      "Name": "HpServerPowerSupply",
      "Oem": {
        "Hp": {
          "@odata.type": "#HpServerPowerSupply.1.0.0.HpServerPowerSupply",
          "AveragePowerOutputWatts": 263,
          "BayNumber": 1,
          "HotplugCapable": true,
          "MaxPowerOutputWatts": 302,
          "Mismatched": false,
          "PowerSupplyStatus": {
            "State": "Ok"
          },
          "Type": "HpServerPowerSupply.1.0.0",
          "iPDU": {
            "Id": "1",
            "Model": "",
            "SerialNumber": "",
            "iPDUStatus": {
              "State": "Unknown"
            }
          },
          "iPDUCapable": true
        }
      },
      "PowerCapacityWatts": 460,
      "PowerSupplyType": "AC",
      "SerialNumber": "XXXXXXXXXXXXXX",
      "SparePartNumber": "660184-001",
      "Status": {
        "Health": "OK",
        "State": "Enabled"
      }
    },
    {
      "FirmwareVersion": "1.00",
      "LastPowerOutputWatts": 5,
      "LineInputVoltage": 121,
      "LineInputVoltageType": "ACMidLine",
      "Model": "656362-B21",
      "Name": "HpServerPowerSupply",
      "Oem": {
        "Hp": {
          "@odata.type": "#HpServerPowerSupply.1.0.0.HpServerPowerSupply",
          "AveragePowerOutputWatts": 5,
          "BayNumber": 2,
          "HotplugCapable": true,
          "MaxPowerOutputWatts": 5,
          "Mismatched": false,
          "PowerSupplyStatus": {
            "State": "Ok"
          },
          "Type": "HpServerPowerSupply.1.0.0",
          "iPDU": {
            "Id": "2",
            "Model": "",
            "SerialNumber": "",
            "iPDUStatus": {
              "State": "Unknown"
            }
          },
          "iPDUCapable": true
        }
      },
      "PowerCapacityWatts": 460,
      "PowerSupplyType": "AC",
      "SerialNumber": "XXXXXXXXXXXXXX",
      "SparePartNumber": "660184-001",
      "Status": {
        "Health": "OK",
        "State": "Enabled"
      }
    }
  ],
  "Redundancy": [
    {
      "MaxNumSupported": 2,
      "MemberId": "0",
      "MinNumNeeded": 2,
      "Mode": "Failover",
      "Name": "PowerSupply Redundancy Group 1",
      "RedundancySet": [
        {
          "@odata.id": "/redfish/v1/Chassis/1/Power#/PowerSupplies/0"
        },
        {
          "@odata.id": "/redfish/v1/Chassis/1/Power#/PowerSupplies/1"
        }
      ]
    }
  ],
  "Type": "PowerMetrics.0.11.0",
  "links": {
    "self": {
      "href": "/redfish/v1/Chassis/1/Power/"
    }
  }
}

Error getting credentialfor target 10.10.10.2 file: no credentials found for target 10.10.10.2

I'm getting the error in the title and I don't know what I'm doing wrong.

My config.yaml looks like:

hosts:
  10.10.10.2:
    username: monitor
    password: bbccbbcc

and I start the exporter with ./redfish_exporter --config.file="/root/config.yaml". I then call http://home.example.org:9610/redfish?target=10.10.10.2 and get the error in the title in the console.

INFO[0000] redfish_exporter version 0.10.1, build reversion 56a0ceab10f9632997f1c4b6b08c51c7e80ab94d, build branch master, build at Fri Oct 25 12:24:23 DST 2019 on host CTUN50947963A    source="main.go:65"
INFO[0000] Starting redfish_exporter                     source="main.go:72"
INFO[0000] Loaded config file                            source="config.go:42"
INFO[0000] Listening on :9610                            source="main.go:97"
INFO[0005] Scraping target 10.10.10.2               source="main.go:37"
FATA[0005] Error getting credentialfor target 10.10.10.2 file: no credentials found for target 10.10.10.2  source="main.go:42"

What am I doing wrong? I noticed the same issue was already closed, but with no solution.

Thanks, Matej

Reading Units for FANs are not always percentages

The collector automatically treats the fan Reading as a percentage and creates a metric named redfish_chassis_fan_rpm_percentage:

prometheus.BuildFQName(namespace, ChassisSubsystem, "fan_rpm_percentage"),

But not all systems return percentage values. In our case, with Dell PowerEdge servers, an absolute value is returned.

    {
      "@odata.id": "/redfish/v1/Chassis/System.Embedded.1/Sensors/Fans/0x17||Fan.Embedded.5B",
      "FanName": "System Board Fan5B",
      "LowerThresholdCritical": 480,
      "LowerThresholdFatal": 480,
      "LowerThresholdNonCritical": 840,
      "MaxReadingRange": 197,
      "MemberId": "0x17||Fan.Embedded.5B",
      "MinReadingRange": 139,
      "Name": "System Board Fan5B",
      "PhysicalContext": "SystemBoard",
      "Reading": 4320,
      "ReadingUnits": "RPM",
      "Redundancy": [],
      "[email protected]": 0,
      "RelatedItem": [
        {
          "@odata.id": "/redfish/v1/Chassis/System.Embedded.1"
        }
      ],
      "[email protected]": 1,
      "Status": {
        "Health": "OK",
        "State": "Enabled"
      },
      "UpperThresholdCritical": null,
      "UpperThresholdFatal": null,
      "UpperThresholdNonCritical": null
    }

The ReadingUnits property is the key here. In our case it is RPM; other systems use "Percent".

So it might make sense to return different metrics depending on the value of ReadingUnits?
What do you think?
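A sketch of that idea (metric names illustrative, not the exporter's actual code): choose the metric by the fan's ReadingUnits instead of assuming percent:

package collector

import "github.com/prometheus/client_golang/prometheus"

var (
	fanRPMDesc = prometheus.NewDesc("redfish_chassis_fan_rpm",
		"fan reading in RPM", []string{"chassis_id", "fan"}, nil)
	fanPercentDesc = prometheus.NewDesc("redfish_chassis_fan_rpm_percentage",
		"fan reading in percent", []string{"chassis_id", "fan"}, nil)
)

// emitFanReading picks the metric based on the ReadingUnits property
// ("RPM" on these Dell PowerEdge servers, "Percent" on others).
func emitFanReading(ch chan<- prometheus.Metric, chassisID, fan, units string, reading float64) {
	desc := fanPercentDesc
	if units == "RPM" {
		desc = fanRPMDesc
	}
	ch <- prometheus.MustNewConstMetric(desc, prometheus.GaugeValue, reading, chassisID, fan)
}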

Need more clarification on redfish_exporter setup with Prometheus

Hi, it is not clear how Prometheus and redfish_exporter communicate. I believe this is done via prometheus.yml. As per the README, I see we need this section:

  - job_name: 'redfish-exporter'

    # metrics_path defaults to '/metrics'
    metrics_path: /redfish

    # scheme defaults to 'http'.

    static_configs:
    - targets:
       - 10.36.48.24
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: localhost:9610  ### the address of the redfish-exporter address
      # (optional) when using group config add this to have group=my_group_name
      - target_label: __param_group
        replacement: my_group_name

What parts here are variables that I need to fill in, other than the target IP?

Note that I can successfully call...
http://10.219.111.125:9610/redfish?target=10.219.111.126&group=

This seems to scrape the target successfully and returns redfish_up 1, but Prometheus is showing redfish_up = 0 every time! Please help!

Unexpected Panic Error (HPE Server)

While executing redfish_exporter, panic errors occur across over 10,000 HPE nodes in my infra.

  • exporter log
2023/01/02 12:50:59  info scraping target host      app=redfish_exporter target=192.168.22.175
2023/01/02 12:50:59  info no PCI-E device data found System=1 app=redfish_exporter collector=SystemCollector operation=system.PCIeDevices() target=192.230.169.59
2023/01/02 12:50:59  info collector scrape completed Chassis=1 app=redfish_exporter collector=ChassisCollector target=192.230.164.41
2023/01/02 12:50:59  info scraping target host      app=redfish_exporter target=192.230.123.45
2023/01/02 12:50:59  info collector scrape completed Manager=1 app=redfish_exporter collector=ManagerCollector target=192.230.178.144
panic: send on closed channel

goroutine 351152 [running]:
github.com/jenningsloy318/redfish_exporter/collector.parseEthernetInterface(0xc016200840, 0xc01bc7e878, 0x8, 0xc01ca5f600, 0xc0463ca040)
	/go/src/github.com/jenningsloy318/redfish_exporter/collector/system_collector.go:684 +0x465
created by github.com/jenningsloy318/redfish_exporter/collector.(*SystemCollector).Collect
	/go/src/github.com/jenningsloy318/redfish_exporter/collector/system_collector.go:532 +0xbdf
  • redfish_exporter.yml
hosts:
  0.0.0.0:
    username:admin
    username:admin
...
groups:
  redfish_hpe:
    username:admin
    username:admin
  • prometheus.yml
 # my global config
global:
  scrape_interval: 60s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 60s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).


# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
           - localhost:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - 'rules.yml'
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: grafana
    static_configs:
      - targets: ["localhost:3000"]

  - job_name: node
    static_configs:
      - targets: ["localhost:9100"]

  - job_name: redfish_exporter
    static_configs:
      - targets: ["localhost:9610"]


  - job_name: "redfish_hpe"
    scrape_interval: 5m
    scrape_timeout: 2m
    file_sd_configs:
      - files :
        - ./target/hpe.json
    metrics_path: /redfish
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: {my_ipv4}:9610
      - target_label: __param_group
        replacement: redfish_hpe

Are there any ideas to solve this problem?
I have no idea how to debug it, because the panic is sudden and there is no specific log.

Additionally, do you have any data about the scale limits of this exporter?

Thank you!

Failed to Get Redfish Data by Group's Auth.

Hello, I'm testing redfish_exporter with Prometheus.
I want to organize the redfish_exporter.yml file by groups because we operate several vendors (HPE, Dell, Lenovo, ...),
so I set up the two config files (redfish_exporter.yml, prometheus.yml) like this:

  • redfish_exporter.yml

groups:
  redfish_dell:
    username: dell_user
    password: dell_pw
  redifsh_lenovo:
    username: lenovo_user
    password: lenovo_user

  • prometheus.yml

- job_name: "redfish-exporter"
  metrics_path: /redfish
  scrape_interval: 5m
  scrape_timeout: 2m
  static_configs:
    - targets: ['192.168.1.1']
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: localhost:9610
    - target_label: __param_group
      replacement: redfish_dell

Unfortunately, I got a failure log about a credentials error:
(error error getting credentials app=redfish_exporter error=no credentials found for target 192.168.1.1 target=192.168.1.1)

Do you have any idea how to solve this issue?
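For comparison, here is a minimal redfish_exporter.yml sketch that combines per-group credentials with a fallback hosts entry; the group names and credentials are placeholders. Note that YAML indentation is significant, so the group entries must be nested under groups:.

hosts:
  default:
    username: fallback_user
    password: fallback_pw
groups:
  redfish_dell:
    username: dell_user
    password: dell_pw
  redfish_lenovo:
    username: lenovo_user
    password: lenovo_pw

It is also worth confirming that the group parameter actually reaches the exporter, e.g. with curl 'http://localhost:9610/redfish?target=192.168.1.1&group=redfish_dell'; if that works while the Prometheus scrape does not, the relabel_configs are the likely culprit.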
Thanks.

deps rule missing from Makefile

Nice tool! But I see that the deps target, listed as a prerequisite of the all: rule, has no corresponding rule in the Makefile. I was able to work around this with a

go get

before running make, but I suspect we want to fix this in the Makefile?
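A minimal deps rule mirroring that workaround might look like the following (a sketch, assuming the project uses Go modules; go mod download only fetches dependencies, while go get ./... would also update them):

.PHONY: deps
deps:
	go mod download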

Error on HP Proliant G9 servers

The exporter throws this error on HP Proliant G9 servers:

An error has occurred while serving metrics:

4 error(s) occurred:
* [from Gatherer #2] collected metric "redfish_chassis_power_powersupply_state" { label:<name:"chassis_id" value:"1" > label:<name:"power_supply" value:"HpServerPowerSupply" > label:<name:"power_supply_id" value:"" > label:<name:"resource" value:"power_supply" > gauge:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "redfish_chassis_power_powersupply_health" { label:<name:"chassis_id" value:"1" > label:<name:"power_supply" value:"HpServerPowerSupply" > label:<name:"power_supply_id" value:"" > label:<name:"resource" value:"power_supply" > gauge:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "redfish_chassis_power_powersupply_last_power_output_watts" { label:<name:"chassis_id" value:"1" > label:<name:"power_supply" value:"HpServerPowerSupply" > label:<name:"power_supply_id" value:"" > label:<name:"resource" value:"power_supply" > gauge:<value:122 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "redfish_chassis_power_powersupply_power_capacity_watts" { label:<name:"chassis_id" value:"1" > label:<name:"power_supply" value:"HpServerPowerSupply" > label:<name:"power_supply_id" value:"" > label:<name:"resource" value:"power_supply" > gauge:<value:500 > } was collected before with the same name and label values

It seems the power supply ID is not being read properly, so every supply ends up with an empty power_supply_id and an identical label set. Is there any way to debug this?
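The "was collected before with the same name and label values" error is the Prometheus client library refusing duplicate label sets within one scrape. Here is a self-contained Go sketch that reproduces it; the collector below is illustrative, not the exporter's code.

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// dupCollector emits the same metric twice with identical labels, which is
// what happens when two power supplies both report an empty MemberId.
type dupCollector struct{ desc *prometheus.Desc }

func (c dupCollector) Describe(ch chan<- *prometheus.Desc) { ch <- c.desc }

func (c dupCollector) Collect(ch chan<- prometheus.Metric) {
	for i := 0; i < 2; i++ { // two PSUs, both with power_supply_id=""
		ch <- prometheus.MustNewConstMetric(c.desc, prometheus.GaugeValue, 1, "")
	}
}

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(dupCollector{desc: prometheus.NewDesc(
		"redfish_chassis_power_powersupply_state", "demo metric",
		[]string{"power_supply_id"}, nil,
	)})
	if _, err := reg.Gather(); err != nil {
		fmt.Println(err) // ...was collected before with the same name and label values
	}
}

Any fix has to make the label set unique per supply, for example by falling back to the slice index when the MemberId is empty.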

ChassisCollector & SystemCollector - Error Scraping

Hey, I'm getting this error:

2020/11/23 17:18:30 error error getting power data from chassis Chassis=Self app=redfish_exporter collector=ChassisCollector error=json: cannot unmarshal number into Go struct field PowerControl.PowerControl.MemberId of type string operation=chassis.Power() target=REDACTED

As well as:

2020/11/23 17:20:46 error error getting processor data from system System=Self app=redfish_exporter collector=SystemCollector error=json: cannot unmarshal string into Go struct field .MaxSpeedMHz of type int operation=system.Processors() target=REDACTED
Any idea on what it could be?
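Both errors come from BMCs returning a different JSON type than the Go struct field expects (a number where a string is declared, and a string where an int is declared); those types live in the gofish library, so the real fix belongs there. As an illustration of the tolerant-parsing approach (a sketch, not gofish's actual code), here is a type that accepts both forms:

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// FlexInt unmarshals from either a JSON number or a quoted numeric string,
// tolerating BMCs that report e.g. MaxSpeedMHz as "3600" instead of 3600.
type FlexInt int

func (f *FlexInt) UnmarshalJSON(b []byte) error {
	var n json.Number
	if err := json.Unmarshal(b, &n); err != nil {
		// Not a bare number; try a quoted string.
		var s string
		if err2 := json.Unmarshal(b, &s); err2 != nil {
			return err
		}
		n = json.Number(s)
	}
	v, err := strconv.ParseFloat(n.String(), 64)
	if err != nil {
		return err
	}
	*f = FlexInt(v)
	return nil
}

func main() {
	var p struct{ MaxSpeedMHz FlexInt }
	for _, in := range []string{`{"MaxSpeedMHz": 3600}`, `{"MaxSpeedMHz": "3600"}`} {
		if err := json.Unmarshal([]byte(in), &p); err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Println(p.MaxSpeedMHz) // 3600 both times
	}
}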

cannot unmarshal number 0.9500012 into Go struct field Power.PowerSupplies

When querying a Dell server with iDRAC, the exporter exits with a panic:

INFO[0012] Errors Getting powerinf from chassis : json: cannot unmarshal number 0.920000016689301 into Go struct field Power.PowerSupplies of type int  source="chassis_collector.go:246"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0xe0 pc=0x86ca57]

goroutine 68 [running]:
github.com/jenningsloy318/redfish_exporter/collector.(*ChassisCollector).Collect(0xc000144de0, 0xc0002b4ea0)
	/Users/matejz/GIT/github/redfish_exporter/collector/chassis_collector.go:227 +0x897
github.com/jenningsloy318/redfish_exporter/collector.(*RedfishCollector).Collect.func1(0xc0000d5ef0, 0xc0002b4ea0, 0xa8a0c0, 0xc000144de0)
	/Users/matejz/GIT/github/redfish_exporter/collector/redfish_collector.go:91 +0x6b
created by github.com/jenningsloy318/redfish_exporter/collector.(*RedfishCollector).Collect
	/Users/matejz/GIT/github/redfish_exporter/collector/redfish_collector.go:89 +0x1e8

The problem is on Dell's side, as it reports EfficiencyPercent as a ratio instead of a percentage. According to the Redfish docs, it should be a number between 0 and 100, but even then it could still be a float, which the int-typed Go field cannot hold.

I opened a ticket on gofish. I also wrote to Dell support and reported an issue.

Just opened this ticket here so it's known, in case anyone else runs into the same problem.

How to show gathered telemetry

Hi, I have a successful connection between Prometheus and the redfish exporter. I am able to show things such as the following:
[screenshot]

What is the Prometheus search query that I can use to load up the PowerConsumedWatts telemetry that appears when I make the following direct Redfish call?
[screenshot]

Or load up SEL entries such as the following?
[screenshot]
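On the PowerConsumedWatts question: the exact metric name depends on the exporter version, so the quickest route is to scrape the exporter directly and grep for it. The metric name in the query below is therefore an assumption, not a confirmed name.

# List the power-related metrics the exporter exposes for one target:
curl -s 'http://localhost:9610/redfish?target=<bmc-ip>' | grep -i consumed

# Then query the name you find in Prometheus, e.g. (hypothetical name):
redfish_chassis_power_control_power_consumed_watts{instance="<bmc-ip>"}

As for SEL entries, a feature request further down this page asks for log-entry metrics; they do not appear to be exported yet, so no Prometheus query will surface them.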

Thank you.

collected metric "redfish_system_pcie_function_state" ... was collected before with the same name and label values on PERC H730 Mini

We have a few older Dell systems that have a PERC H730 Mini integrated RAID controller. On these systems, redfish_exporter (latest git: e28371d) throws a fatal error, while it used to work ok prior to the collection of more detailed PCIe metrics:

An error has occurred while serving metrics:

2 error(s) occurred:
* [from Gatherer #2] collected metric "redfish_system_pcie_function_state" { label:<name:"hostname" value:"" > label:<name:"pci_function_deviceclass" value:"UnclassifiedDevice" > label:<name:"pci_function_type" value:"Physical" > label:<name:"pcie_function_id" value:"0-0-0" > label:<name:"pcie_function_name" value:"PERC H730 Mini" > label:<name:"resource" value:"pcie_function" > gauge:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "redfish_system_pcie_function_health_state" { label:<name:"hostname" value:"" > label:<name:"pci_function_deviceclass" value:"UnclassifiedDevice" > label:<name:"pci_function_type" value:"Physical" > label:<name:"pcie_function_id" value:"0-0-0" > label:<name:"pcie_function_name" value:"PERC H730 Mini" > label:<name:"resource" value:"pcie_function" > gauge:<value:1 > } was collected before with the same name and label values

I think perhaps these adapters don't report a "state" the way the exporter expects. This is the data from /redfish/v1/Systems/System.Embedded.1/Storage/RAID.Integrated.1-1:

{
  "@odata.context": "/redfish/v1/$metadata#Storage.Storage",
  "@odata.id": "/redfish/v1/Systems/System.Embedded.1/Storage/RAID.Integrated.1-1",
  "@odata.type": "#Storage.v1_4_0.Storage",
  "Description": "PERC H730 Mini",
  "Drives": [
    {
      "@odata.id": "/redfish/v1/Systems/System.Embedded.1/Storage/Drives/Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1"
    },
    {
      "@odata.id": "/redfish/v1/Systems/System.Embedded.1/Storage/Drives/Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1"
    },
    {
      "@odata.id": "/redfish/v1/Systems/System.Embedded.1/Storage/Drives/Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1"
    },
    {
      "@odata.id": "/redfish/v1/Systems/System.Embedded.1/Storage/Drives/Disk.Bay.3:Enclosure.Internal.0-1:RAID.Integrated.1-1"
    },
    {
      "@odata.id": "/redfish/v1/Systems/System.Embedded.1/Storage/Drives/Disk.Bay.4:Enclosure.Internal.0-1:RAID.Integrated.1-1"
    },
    {
      "@odata.id": "/redfish/v1/Systems/System.Embedded.1/Storage/Drives/Disk.Bay.5:Enclosure.Internal.0-1:RAID.Integrated.1-1"
    },
    {
      "@odata.id": "/redfish/v1/Systems/System.Embedded.1/Storage/Drives/Disk.Bay.6:Enclosure.Internal.0-1:RAID.Integrated.1-1"
    },
    {
      "@odata.id": "/redfish/v1/Systems/System.Embedded.1/Storage/Drives/Disk.Bay.7:Enclosure.Internal.0-1:RAID.Integrated.1-1"
    }
  ],
  "[email protected]": 8,
  "Id": "RAID.Integrated.1-1",
  "Links": {
    "Enclosures": [
      {
        "@odata.id": "/redfish/v1/Chassis/Enclosure.Internal.0-1:RAID.Integrated.1-1"
      },
      {
        "@odata.id": "/redfish/v1/Chassis/System.Embedded.1"
      }
    ],
    "[email protected]": 2
  },
  "Name": "PERC H730 Mini",
  "Status": {
    "Health": "OK",
    "HealthRollup": "OK",
    "State": "Enabled"
  },
  "StorageControllers": [
    {
      "@odata.id": "/redfish/v1/Systems/System.Embedded.1/StorageControllers/RAID.Integrated.1-1",
      "Assembly": {
        "@odata.id": "/redfish/v1/Chassis/System.Embedded.1/Assembly"
      },
      "FirmwareVersion": "25.5.6.0009",
      "Identifiers": [
        {
          "DurableName": "544A842006943000",
          "DurableNameFormat": "NAA"
        }
      ],
      "Links": {},
      "Manufacturer": "DELL",
      "MemberId": "RAID.Integrated.1-1",
      "Model": "PERC H730 Mini",
      "Name": "PERC H730 Mini",
      "SpeedGbps": 12,
      "Status": {
        "Health": "OK",
        "HealthRollup": "OK",
        "State": "Enabled"
      },
      "SupportedControllerProtocols": [
        "PCIe"
      ],
      "SupportedDeviceProtocols": [
        "SAS",
        "SATA"
      ]
    }
  ],
  "[email protected]": 1,
  "Volumes": {
    "@odata.id": "/redfish/v1/Systems/System.Embedded.1/Storage/RAID.Integrated.1-1/Volumes"
  }
}

Prometheus server shows DOWN

I tried to run the redfish exporter and Prometheus server according to the README, but in the Targets section the target always shows DOWN. I can fetch data from the exporter alone, but I couldn't fetch it through the Prometheus server.

Unable to fetch all the HW monitoring data like Storage, RAID, HDD, Drive slot, BIOS info, etc.

I am using this redfish exporter to fetch monitoring metrics from Supermicro machines. I am getting a few of the metrics, but I am unable to fetch most of the important monitoring metrics, such as storage, RAID, HDD, drive slot, and BIOS info, or more detailed hardware metrics. Can you please suggest some way to get metrics for these items as well?

Please find attached the only metrics I am able to fetch.
Redfish-EXP_on_Supermicro-HW.txt

Add command line options to disable specific collectors

When we scrape some of our redfish servers, one scrape takes up to 40 seconds.

We tried disabling some collectors whose information/metrics we don't need and got down to something like 10 to 20 seconds. To disable them, we just removed them from the source code and rebuilt the exporter.

It would be very helpful if we could disable specific collectors via a command line option or the config file. To start, it would probably be enough to disable one or two of the three collectors (manager, chassis, system); see the sketch below. As a next step, a more fine-grained configuration to disable, for example, memory and storage within the system collector would be nice, but is definitely more work to implement.
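A minimal sketch of what such flags could look like, assuming kingpin-style flag parsing (common in Prometheus exporters); the flag names and the gating logic are illustrative, not the exporter's current command line interface:

package main

import (
	"fmt"

	"gopkg.in/alecthomas/kingpin.v2"
)

// Hypothetical per-collector flags.
var (
	enableManager = kingpin.Flag("collector.manager", "Enable the manager collector.").Default("true").Bool()
	enableChassis = kingpin.Flag("collector.chassis", "Enable the chassis collector.").Default("true").Bool()
	enableSystem  = kingpin.Flag("collector.system", "Enable the system collector.").Default("true").Bool()
)

func main() {
	kingpin.Parse()
	// In the exporter, each collector would only be registered with the
	// Prometheus registry when its flag is true.
	for name, on := range map[string]bool{
		"manager": *enableManager,
		"chassis": *enableChassis,
		"system":  *enableSystem,
	} {
		fmt.Printf("collector %s enabled=%v\n", name, on)
	}
}

With kingpin, boolean flags can then be turned off on the command line as --no-collector.system and so on.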

Feature request: Log count and intrusion detection

Would it be possible to add metrics for log entries and intrusion detection? Both of these can change system/chassis health to warning or critical, but at the moment there is no way to see what is causing the warning/critical state of a system when it is due to intrusion detection or entries in the system logs. (A sketch of reading the log entries with gofish follows the JSON samples below.)
Log entries: https://hostname/redfish/v1/Systems/1/LogServices/Log1/Entries

{
    "@odata.context": "/redfish/v1/$metadata#LogEntryCollection.LogEntryCollection",
    "@odata.type": "#LogEntryCollection.LogEntryCollection",
    "@odata.id": "/redfish/v1/Systems/1/LogServices/Log1/Entries",
    "Name": "Health Event Log Service Collection",
    "Description": "Collection of Health Event Logs",
    "[email protected]": 2,
    "Members": [
        {
            "@odata.id": "/redfish/v1/Systems/1/LogServices/Log1/Entries/1",
            "@odata.type": "#LogEntry.v1_3_0.LogEntry",
            "Id": "1",
            "Name": "Health Event Log Entry 1",
            "EntryType": "Event",
            "Severity": "Warning",
            "Created": "2020-07-07T10:21:02+00:00",
            "EntryCode": "Deassert",
            "SensorType": "Battery",
            "SensorNumber": 93,
            "Message": "BBU presence (StorageController0)",
            "MessageArgs": [
                "ArrayOfMessageArgs"
            ],
            "Links": {
                "Oem": {}
            },
            "Oem": {
                "Supermicro": {
                    "MarkAsAcknowledged": false,
                    "@odata.type": "#SmcLogEntryExtensions.v1_0_0.LogEntry",
                    "RawEventData": {
                        "EventDirAndType": "0xF0",
                        "SensorType": "0x29",
                        "EventData1": "0x02",
                        "EventData2": "0x00",
                        "EventData3": "0x00"
                    }
                }
            }
        },
        {
            "@odata.id": "/redfish/v1/Systems/1/LogServices/Log1/Entries/2",
            "@odata.type": "#LogEntry.v1_3_0.LogEntry",
            "Id": "2",
            "Name": "Health Event Log Entry 2",
            "EntryType": "Event",
            "Severity": "OK",
            "Created": "2020-07-07T10:21:29+00:00",
            "EntryCode": "Assert",
            "SensorType": "Battery",
            "SensorNumber": 93,
            "Message": "BBU presence (StorageController0)",
            "MessageArgs": [
                "ArrayOfMessageArgs"
            ],
            "Links": {
                "Oem": {}
            },
            "Oem": {
                "Supermicro": {
                    "@odata.type": "#SmcLogEntryExtensions.v1_0_0.LogEntry",
                    "RawEventData": {
                        "EventDirAndType": "0x70",
                        "SensorType": "0x29",
                        "EventData1": "0x02",
                        "EventData2": "0x00",
                        "EventData3": "0x00"
                    }
                }
            }
        }
    ]
}

Intrusion: https://hostname/redfish/v1/Chassis/1

{
    "@odata.context": "/redfish/v1/$metadata#Chassis.Chassis",
    "@odata.type": "#Chassis.v1_4_0.Chassis",
    "@odata.id": "/redfish/v1/Chassis/1",
    "Id": "1",
    "Name": "Computer System Chassis",
    "ChassisType": "RackMount",
    "Manufacturer": "Supermicro",
    "Model": "X11SPW-TF",
    "SKU": "",
    "SerialNumber": "XXXXXXXX",
    "PartNumber": "CSE-116TS-R504WBP",
    "AssetTag": "",
    "IndicatorLED": "Off",
    "Status": {
        "State": "Enabled",
        "Health": "Critical",
        "HealthRollup": "Critical"
    },
    "PhysicalSecurity": {
        "IntrusionSensorNumber": 170,
        "IntrusionSensor": "HardwareIntrusion",
        "IntrusionSensorReArm": "Manual"
    },
    "Power": {
        "@odata.id": "/redfish/v1/Chassis/1/Power"
    },
    "Thermal": {
        "@odata.id": "/redfish/v1/Chassis/1/Thermal"
    },
    "Links": {
        "ComputerSystems": [
            {
                "@odata.id": "/redfish/v1/Systems/1"
            }
        ],
        "PCIeDevices": [
            {
                "@odata.id": "/redfish/v1/Systems/1/PCIeDevices/NIC1"
            }
        ],
        "ManagedBy": [
            {
                "@odata.id": "/redfish/v1/Managers/1"
            }
        ]
    },
    "Oem": {
        "Supermicro": {
            "@odata.type": "#SmcChassisExtensions.v1_0_0.Chassis",
            "BoardSerialNumber": "XXXXXX",
            "GUID": "34313031-4D53-3CEC-EF06-B1D500000000",
            "BoardID": "0x953"
        }
    }
}
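As referenced above, here is a self-contained sketch of counting log entries by severity with gofish (the library this exporter builds on). The endpoint and credentials are placeholders, and the exact gofish accessors may differ between library versions.

package main

import (
	"fmt"
	"log"

	"github.com/stmcginnis/gofish"
)

func main() {
	// Placeholder endpoint and credentials.
	client, err := gofish.Connect(gofish.ClientConfig{
		Endpoint: "https://<bmc-ip>",
		Username: "admin",
		Password: "pass",
		Insecure: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Logout()

	systems, err := client.Service.Systems()
	if err != nil {
		log.Fatal(err)
	}

	// Count log entries per severity; exported as gauges, this would let
	// alerts explain a Warning/Critical health rollup.
	counts := map[string]int{}
	for _, sys := range systems {
		services, err := sys.LogServices()
		if err != nil {
			continue
		}
		for _, svc := range services {
			entries, err := svc.Entries()
			if err != nil {
				continue
			}
			for _, e := range entries {
				counts[fmt.Sprintf("%v", e.Severity)]++
			}
		}
	}
	fmt.Println(counts) // e.g. map[OK:1 Warning:1] for the sample above
}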

Encryption of Username and Password

The configuration file currently stores usernames and passwords in unencrypted plain text, which exposes potential security vulnerabilities. To mitigate these risks, the proposal is to implement encryption for the configuration file contents, mainly the password.

hostname is null

The hostname is coming up null on a Supermicro server:

redfish_system_memory_state{hostname="",memory="P1-DIMMA1",memory_id="1",resource="memory"} 1

I've set the hostname via the IPMI interface.

Build Issues on CentOS 8

Can anyone help with this build issue?

make: go: Command not found
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).

building binaries
go build -o build/redfish_exporter -ldflags '-X "main.Version=0.11.0" -X "main.BuildRevision=" -X "main.BuildBranch=" -X "main.BuildTime=2023-09-11 13:47:11-07:00" -X "main.BuildHost=Telco-Tools"'
make: go: Command not found
make: *** [Makefile:36: build] Error 127
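Both errors ("go: Command not found" and "not a git repository") point to missing build prerequisites rather than a Makefile problem. One way to satisfy them on CentOS 8 (a sketch; package availability depends on your configured repos):

sudo dnf install -y golang git
git clone https://github.com/jenningsloy318/redfish_exporter.git
cd redfish_exporter
make build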
