
Wazuh - Ruleset

Home Page: https://wazuh.com


wazuh-ruleset's Introduction

Wazuh


Wazuh is a free and open source platform used for threat prevention, detection, and response. It is capable of protecting workloads across on-premises, virtualized, containerized, and cloud-based environments.

The Wazuh solution consists of an endpoint security agent, deployed to the monitored systems, and a management server, which collects and analyzes the data gathered by the agents. In addition, Wazuh is fully integrated with the Elastic Stack, providing a search engine and data visualization tool that allows users to navigate through their security alerts.

Wazuh capabilities

Below is a brief overview of some of the most common use cases of the Wazuh solution.

Intrusion detection

Wazuh agents scan the monitored systems looking for malware, rootkits and suspicious anomalies. They can detect hidden files, cloaked processes or unregistered network listeners, as well as inconsistencies in system call responses.

In addition to agent capabilities, the server component uses a signature-based approach to intrusion detection, using its regular expression engine to analyze collected log data and look for indicators of compromise.
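The signature-based approach described above can be sketched in a few lines of Python. This is a hypothetical, simplified illustration only: the patterns, descriptions, and function names below are invented for the example and are not actual Wazuh rules or APIs.

```python
import re

# Each "signature" pairs a regular expression with an alert description.
# Illustrative patterns only, not real Wazuh ruleset entries.
SIGNATURES = [
    (re.compile(r"Failed password for \S+ from (\S+)"), "SSH authentication failure"),
    (re.compile(r"Accepted password for \S+ from (\S+)"), "SSH authentication success"),
]

def analyze(line):
    """Return (description, source_ip) for the first matching signature, or None."""
    for pattern, description in SIGNATURES:
        match = pattern.search(line)
        if match:
            return description, match.group(1)
    return None

alert = analyze("May 23 11:27:29 host sshd[28184]: "
                "Failed password for root from 203.0.113.5 port 38836 ssh2")
```

In the real engine, thousands of such rules are organized in a tree and evaluated against decoded fields rather than raw lines, but the matching principle is the same.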

Log data analysis

Wazuh agents read operating system and application logs, and securely forward them to a central manager for rule-based analysis and storage. When no agent is deployed, the server can also receive data via syslog from network devices or applications.

The Wazuh rules help make you aware of application or system errors, misconfigurations, attempted and/or successful malicious activities, policy violations and a variety of other security and operational issues.

File integrity monitoring

Wazuh monitors the file system, identifying changes in content, permissions, ownership, and attributes of files that you need to keep an eye on. In addition, it natively identifies users and applications used to create or modify files.

File integrity monitoring capabilities can be used in combination with threat intelligence to identify threats or compromised hosts. In addition, several regulatory compliance standards, such as PCI DSS, require it.
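The core idea of file integrity monitoring, comparing a stored baseline of content hashes and metadata against the current state, can be sketched as follows. This is a simplified illustration, not Wazuh's actual FIM database format; the function names are invented for the example.

```python
import hashlib
import os
import stat
import tempfile

def file_fingerprint(path):
    """Baseline entry for a file: content hash plus permission/ownership metadata."""
    st = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"sha256": digest, "mode": stat.filemode(st.st_mode),
            "uid": st.st_uid, "gid": st.st_gid}

def changed_attributes(baseline, current):
    """Return the set of attributes that differ between two fingerprints."""
    return {key for key in baseline if baseline[key] != current[key]}

# Demo: modify a temporary file and detect the content change.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("original contents\n")
baseline = file_fingerprint(path)
with open(path, "w") as f:
    f.write("tampered contents\n")
diff = changed_attributes(baseline, file_fingerprint(path))
os.remove(path)
```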

Vulnerability detection

Wazuh agents pull software inventory data and send this information to the server, where it is correlated with continuously updated CVE (Common Vulnerabilities and Exposure) databases, in order to identify well-known vulnerable software.

Automated vulnerability assessment helps you find the weak spots in your critical assets and take corrective action before attackers exploit them to sabotage your business or steal confidential data.
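The correlation step described above amounts to a lookup of the software inventory against a CVE feed. A minimal sketch, assuming a feed keyed by (package, version); the package names, versions, and CVE IDs below are illustrative only.

```python
# Hypothetical CVE feed: (package, version) -> list of CVE identifiers.
CVE_DB = {
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],
    ("bash", "4.3"): ["CVE-2014-6271"],
}

def find_vulnerable(inventory):
    """Yield (package, version, cve_id) for every inventory entry known to the feed."""
    for name, version in inventory:
        for cve in CVE_DB.get((name, version), []):
            yield name, version, cve

alerts = list(find_vulnerable([("openssl", "1.0.1f"), ("curl", "8.5.0")]))
```

Real feeds also encode version ranges and platform constraints, so matching is more involved than this exact-version lookup.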

Configuration assessment

Wazuh monitors system and application configuration settings to ensure they are compliant with your security policies, standards and/or hardening guides. Agents perform periodic scans to detect applications that are known to be vulnerable, unpatched, or insecurely configured.

Additionally, configuration checks can be customized and tailored to align with your organization's policies. Alerts include recommendations for better configuration, references, and mappings to regulatory compliance requirements.

Incident response

Wazuh provides out-of-the-box active responses to perform various countermeasures to address active threats, such as blocking access to a system from the threat source when certain criteria are met.

In addition, Wazuh can be used to remotely run commands or system queries, identifying indicators of compromise (IOCs) and helping perform other live forensics or incident response tasks.

Regulatory compliance

Wazuh provides some of the necessary security controls to become compliant with industry standards and regulations. These features, combined with its scalability and multi-platform support help organizations meet technical compliance requirements.

Wazuh is widely used by payment processing companies and financial institutions to meet PCI DSS (Payment Card Industry Data Security Standard) requirements. Its web user interface provides reports and dashboards that can help with this and other regulations (e.g. GPG13 or GDPR).

Cloud security

Wazuh helps monitor cloud infrastructure at the API level, using integration modules that pull security data from well-known cloud providers such as Amazon AWS, Azure, and Google Cloud. In addition, Wazuh provides rules to assess the configuration of your cloud environment, easily spotting weaknesses.

Wazuh's lightweight, multi-platform agents are also commonly used to monitor cloud environments at the instance level.

Containers security

Wazuh provides security visibility into your Docker hosts and containers, monitoring their behavior and detecting threats, vulnerabilities and anomalies. The Wazuh agent has native integration with the Docker engine allowing users to monitor images, volumes, network settings, and running containers.

Wazuh continuously collects and analyzes detailed runtime information. For example, alerting for containers running in privileged mode, vulnerable applications, a shell running in a container, changes to persistent volumes or images, and other possible threats.

WUI

The Wazuh WUI provides a powerful user interface for data visualization and analysis. This interface can also be used to manage Wazuh configuration and to monitor its status.

Modules overview

  • Security events
  • Integrity monitoring
  • Vulnerability detection
  • Regulatory compliance
  • Agents overview
  • Agent summary

Orchestration

Here you can find all the automation tools maintained by the Wazuh team.

Branches

  • master branch contains the latest code; be aware of possible bugs on this branch.
  • stable branch corresponds to the latest Wazuh stable version.

Software and libraries used

Software Version Author License
bzip2 1.0.8 Julian Seward BSD License
cJSON 1.7.12 Dave Gamble MIT License
cPython 3.10.13 Guido van Rossum Python Software Foundation License version 2
cURL 8.5.0 Daniel Stenberg MIT License
Flatbuffers 23.5.26 Google Inc. Apache 2.0 License
GoogleTest 1.11.0 Google Inc. 3-Clause "New" BSD License
jemalloc 5.2.1 Jason Evans 2-Clause "Simplified" BSD License
Lua 5.3.6 PUC-Rio MIT License
libarchive 3.7.2 Tim Kientzle 3-Clause "New" BSD License
libdb 18.1.40 Oracle Corporation Affero GPL v3
libffi 3.2.1 Anthony Green MIT License
libpcre2 10.42.0 Philip Hazel BSD License
libplist 2.2.0 Aaron Burghardt et al. GNU Lesser General Public License version 2.1
libYAML 0.1.7 Kirill Simonov MIT License
liblzma 5.4.2 Lasse Collin, Jia Tan et al. GNU Public License version 3
Linux Audit userspace 2.8.4 Rik Faith LGPL (copyleft)
msgpack 3.1.1 Sadayuki Furuhashi Boost Software License version 1.0
nlohmann 3.7.3 Niels Lohmann MIT License
OpenSSL 3.0.12 OpenSSL Software Foundation Apache 2.0 License
pacman 5.2.2 Judd Vinet GNU Public License version 2 (copyleft)
popt 1.16 Jeff Johnson & Erik Troan MIT License
procps 2.8.3 Brian Edmonds et al. LGPL (copyleft)
RocksDB 8.3.2 Facebook Inc. Apache 2.0 License
rpm 4.18.2 Marc Ewing & Erik Troan GNU Public License version 2 (copyleft)
sqlite 3.45.0 D. Richard Hipp Public Domain (no restrictions)
zlib 1.3.1 Jean-loup Gailly & Mark Adler zlib/libpng License

Documentation

Get involved

Become part of the Wazuh community to learn from other users, participate in discussions, talk to our developers, and contribute to the project.

If you want to contribute to our project, please don't hesitate to open pull requests, submit issues, or send commits; we will review all your contributions.

You can also join our Slack community channel and mailing list by sending an email to [email protected], to ask questions and participate in discussions.

Stay up to date on news, releases, engineering articles and more.

Authors

Wazuh Copyright (C) 2015-2023 Wazuh Inc. (License GPLv2)

Based on the OSSEC project started by Daniel Cid.

wazuh-ruleset's People

Contributors

adriiiprodri, albertomn86, banditopazzo, branchnetconsulting, chemamartinez, crd1985, cristgl, crolopez, danimegar, davidjiglesias, ddpbsd, eliasgrana, elwali10, frgv, gkissand, iasdeoupxe, jesuslinares, jlruizmlg, jnasselle, k-embee, lopuiz, miguelcasaresrobles, mikykeane, psanchezr, santiago-bassett, sitorbj, snaow, stwilliamswfs, tjoserafael, vikman90


wazuh-ruleset's Issues

Update rules

The checks 1.4.2 and 1.4.3 in cis_rhel7_linux_rcl.txt must be updated to correctly identify SELinux:

1.4.2 Set selinux state
[CIS - RHEL7 - 1.4.2 - SELinux not set to enforcing {CIS: 1.4.2 RHEL7} {PCI_DSS: 2.2.4}] [any] [https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf]

from
f:/etc/selinux/config -> r:SELINUX=enforcing;
to
f:/etc/selinux/config -> !r:SELINUX=enforcing;

1.4.3 Set selinux policy
[CIS - RHEL7 - 1.4.3 - SELinux policy not set to targeted {CIS: 1.4.3 RHEL7} {PCI_DSS: 2.2.4}] [any] [https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf]

from
f:/etc/selinux/config -> r:SELINUXTYPE=targeted;
to
f:/etc/selinux/config -> !r:SELINUXTYPE=targeted;

BR
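The r:/!r: semantics the proposed fix relies on can be illustrated in Python (a simplified sketch only; the real rootcheck engine uses its own matching dialect). A plain "r:" check fires when the pattern matches the file, while the negated "!r:" form fires when it does not, which is what turns "SELINUX=enforcing is absent" into an alert.

```python
import re

def rcl_check(file_text, pattern, negate=False):
    """Illustrative rootcheck-style content test: fires on match, or on
    non-match when negate=True (the '!r:' form)."""
    matched = re.search(pattern, file_text, re.MULTILINE) is not None
    return matched != negate

# A non-enforcing /etc/selinux/config, as the issue describes.
non_enforcing = "SELINUX=permissive\nSELINUXTYPE=targeted\n"
```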

New rules require JSON Decoder in 3.0.0 but not yet released

It looks as though there's an incompatibility between the released ruleset and the application. The new rules rely on a JSON_Decoder, but wazuh-master is still at 2.1.1, which doesn't have it. I assume this is fixed in 3.0.0?

On Ubuntu 14.04, wazuh-master after running update_ruleset.py:

2017/12/06 14:48:35 ossec-testrule: INFO: Reading decoder file ruleset/decoders/0006-json_decoders.xml.
2017/12/06 14:48:35 ossec-testrule: ERROR: (2110): Invalid decoder argument for plugin_decoder: 'JSON_Decoder'.
2017/12/06 14:48:35 ossec-testrule: CRITICAL: (1202): Configuration error at 'ruleset/decoders/0006-json_decoders.xml'. Exiting.

windows decoders won't pick correct account name for account lockout messages

Sample log message

2017 Jun 10 12:34:01 WinEvtLog: Security: AUDIT_SUCCESS(4740): 
Microsoft-Windows-Security-Auditing: (no user): no domain: SERVER.mydomain.local: 0x8000000000000000    
message: A user account was locked out.  
Subject:  
Security ID:  S-1-5-18  
Account Name:  SERVER$  
Account Domain:  MYDOMAIN  
Logon ID:  0x3e7  
Account That Was Locked Out:  Security ID:  S-1-5-21-1634102539-605432415-635521153-12345  Account Name:  user_account_name  Additional Information:  Caller Computer Name: OTHERSERVER

I've broken the message into multiple lines for readability.
Since the account_name is extracted by matching Account Name: xxxx, and there are two instances, only the first one is picked up. In this message, that does not correspond to the account that was locked out.
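The problem can be reproduced with a short Python snippet (Wazuh decoders use their own regex dialect, so this is only an illustration of the logic, using a condensed version of the event text). Anchoring the extraction on the "Account That Was Locked Out" section picks up the correct name instead of the subject's machine account.

```python
import re

# Condensed, hypothetical 4740 event text with two "Account Name:" fields.
log = ("A user account was locked out. Subject: Security ID: S-1-5-18 "
       "Account Name: SERVER$ Account Domain: MYDOMAIN "
       "Account That Was Locked Out: Security ID: S-1-5-21-1 "
       "Account Name: user_account_name "
       "Additional Information: Caller Computer Name: OTHERSERVER")

# First-match extraction returns the subject (machine account)...
first = re.search(r"Account Name:\s+(\S+)", log).group(1)
# ...while anchoring on the lockout section returns the locked-out user.
locked = re.search(r"Account That Was Locked Out:.*?Account Name:\s+(\S+)", log).group(1)
```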

Wazuh Version 2 Rule Update Error

Hi!

Not sure if you are aware, but I think update_ruleset.py is pulling down the incorrect files from GitHub.

When run I get the error:
ERROR: Unkown: [Errno 2] No such file or directory: '/var/ossec/tmp/ruleset/downloads/wazuh-ruleset/update_ruleset.py'.

When looking at the directory:

-rw-r-----. 1 root ossec 3.4K Mar 14 10:37 CHANGELOG.md
-rw-r-----. 1 root ossec 47K Mar 14 10:37 ossec_ruleset.py
-rw-r-----. 1 root ossec 2.2K Mar 14 10:37 README.md
drw-r-----. 2 root ossec 4.0K Mar 14 10:37 rootcheck
drw-r-----. 9 root ossec 4.0K Mar 14 10:37 rules-decoders
-rw-r-----. 1 root ossec 83K Mar 14 10:37 Ruleset_Reference.ods
drw-r-----. 5 root ossec 74 Mar 14 10:37 tools
-rw-r-----. 1 root ossec 5 Mar 14 10:37 VERSION

It seems like it's pulling down the old ossec_ruleset.py script instead of update_ruleset.py.

Thanks,
Steve

Some MS Log Events Not Showing Up

I have two identical Wazuh 6.2.3 deployments on ELK 6.2.3. One of them (#1) allows Windows Event ID 4624 to show up in searches, while the other (#2) doesn't. I was able to find a single instance of Event ID (data.id) 4624 in #2's data, and it was related to a "Multiple authentication failures followed by a success." alarm. Are these events being suppressed and not sent to the ELK stack? I need them in the data so I can report on certain successful logon events.

Additional / new logfile pattern for failed roundcube logins

I've noticed the following in the "errors" file of a current roundcube 1.2.5 installation:

[31-Aug-2017 13:50:32 +0200]: <foo> IMAP Error: Login failed for admin from 192.168.178.1. AUTHENTICATE PLAIN: Authentication failed. in /var/www/html/roundcube/program/lib/Roundcube/rcube_imap.php on line 193 (POST /roundcube/?_task=login&_action=login)

where it seems that https://github.com/wazuh/wazuh-ruleset/blob/master/decoders/0255-roundcube_decoders.xml doesn't match.

Cross-Ref: ossec/ossec-hids#1245

Failed Password Not Recognized

A pretty typical message from "/var/log/auth.log" doesn't seem to be recognized. This makes it difficult to wrap/filter out the rule, since it falls under the category of rule 1002:

Rule: 1002 fired (level 2) -> "Unknown problem somewhere in the system."

This is the message that doesn't seem to be caught properly:

May 23 11:27:29 a.b sshd[28184]: message repeated 2 times: [ Failed password for root from a.b.c.d port 38836 ssh2]

Our ruleset is updated daily. Let me know if you need any further info, or any action from my side.

Cheers
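One way to make such lines matchable is to unwrap syslog's "message repeated N times: [ ... ]" envelope first, then feed the inner message through the normal failed-password signatures. A Python sketch of the idea (not the OSSEC regex dialect, and not an actual decoder):

```python
import re

REPEAT = re.compile(r"message repeated (\d+) times: \[ (.*)\]$")

def unwrap(line):
    """Return (repeat_count, inner_message); count is 1 for plain lines."""
    m = REPEAT.search(line)
    if m:
        return int(m.group(1)), m.group(2)
    return 1, line

count, inner = unwrap("May 23 11:27:29 a.b sshd[28184]: message repeated 2 times: "
                      "[ Failed password for root from a.b.c.d port 38836 ssh2]")
```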

MJ12bot listed as known malicious user agent?

The following commit to ossec-hids (which also seems to be only about nginx), ossec/ossec-hids@559af98#diff-d07df015283394cd19b1d54f94e11af0, has added MJ12bot to the list of "known malicious user agents".

IMHO this is wrong, and that rule is not the place for such bots. Reason:

MJ12bot is a spider/crawler which even obeys the robots.txt "Disallow" and "Crawl-Delay" settings.

If someone wants to get rid of MJ12bot, the right place for that is the robots.txt.

Cross-ref: ossec/ossec-hids#1317

ownCloud rules stopped working (with Wazuh 3.2.0?)

Hi,

I was happy to see rules for ownCloud introduced some time ago (just found #64), and they worked fine here.

For some reason those rules seem to have stopped working, and only a syslog: User authentication failure. is fired on wrongly passed credentials. I can't say for sure when this stopped working; maybe on one of the last Wazuh upgrades?

Unfortunately the user @ghost who implemented #64 is gone from GitHub, so I hope it is OK to create this issue instead.

Rule 80006 Puppet Master: not run - address in use - Level 15?

Level 15 for Puppet having an address-in-use error is perhaps a bit excessive? Would it not be better suited to 12 or 13?

<rule id="80006" level="15">
        <if_sid>80000</if_sid>
        <match>^Could not run: Address already in use</match>
        <description>Puppet Master: not run - address in use</description>
</rule>

"No chances of false positives" for Level 15 - but it seems it can easily happen if someone tries running puppet master if it's already running... https://ask.puppet.com/question/13876/error-could-not-run-address-already-in-use-bind2/

(I'm aware I can locally overwrite the level in local_rules.xml)


Rule 18107 ignore a system user

Is there any way we can exclude a system user from our audit login alerts? We have 2 Windows users belonging to a monitoring program which constantly log in to our server.
Can we exclude by account_name?
For example: ignore the user monitor_app_user.

thanks

sendmail-reject decoders extracting right square bracket as part of srcip field

These two child decoders in /var/ossec/ruleset/decoders/0275-sendmail_decoders.xml do not extract the IP correctly.

<decoder name="sendmail-reject-nodns">
  <parent>sendmail-reject</parent>
  <prematch>relay=[</prematch>
  <regex offset="after_prematch">^(\S+)]</regex>
  <order>srcip</order>
</decoder>

<decoder name="sendmail-reject-dns">
  <parent>sendmail-reject</parent>
  <prematch>relay=\S+ [</prematch>
  <regex offset="after_prematch">^(\S+)]</regex>
  <order>srcip</order>
</decoder>

at least from sendmail log lines like this:

Sep 29 17:11:02 ramp sendmail[21549]: v8TLB2x7021549: from=<[email protected]>, size=909, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
Sep 29 17:11:02 ramp sendmail[21549]: v8TLB2x7021549: from=<[email protected]>, size=909, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=[127.0.0.1]

In both cases the srcip field is extracted as "127.0.0.1]". The right square bracket should not be there. This creates many error messages in Logstash when it tries to do a geoip lookup on the srcip field.

[2017-09-29T00:05:03,613][ERROR][logstash.filters.geoip   ] IP Field contained invalid IP address or hostname {:exception=>java.net.UnknownHostException: 127.0.0.1], :field=>"srcip".....

I changed the regex in the decoders like this:

<decoder name="sendmail-reject-nodns">
  <parent>sendmail-reject</parent>
  <prematch>relay=[</prematch>
  <regex offset="after_prematch">^(\d+.\d+.\d+.\d+)]</regex>
  <order>srcip</order>
</decoder>

<decoder name="sendmail-reject-dns">
  <parent>sendmail-reject</parent>
  <prematch>relay=\S+ [</prematch>
  <regex offset="after_prematch">^(\d+.\d+.\d+.\d+)]</regex>
  <order>srcip</order>
</decoder>

which appears to fix the problem for my examples, but I presume this would totally break extraction of an IPv6 relay address from sendmail logs. I suppose these two child decoders could become four: sendmail-reject-nodns-ipv4, sendmail-reject-dns-ipv4,sendmail-reject-nodns-ipv6, sendmail-reject-dns-ipv6. In that case then how do we fix the ipv6 versions of these child decoders so they do not pick up the closing right square bracket as part of the srcip field? Does anyone have any sendmail log examples they would be willing to share where the "relay=" part of the log includes an IPv6 address?
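For what it's worth, a version-agnostic way to avoid the trailing bracket is to match everything up to the closing "]" instead of a run of non-space characters. The Wazuh/OSSEC regex dialect differs from PCRE, so the following Python snippet is a sketch of the idea, not a drop-in decoder fix:

```python
import re

# Match the relay host/IP inside brackets, with an optional hostname in
# front; [^\]]+ stops at the closing bracket, so it works for IPv4 and
# IPv6 alike without capturing the "]".
RELAY = re.compile(r"relay=(?:\S+ )?\[([^\]]+)\]")

def relay_ip(line):
    m = RELAY.search(line)
    return m.group(1) if m else None
```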

IOS decoders will not match

Hello, I noticed that the regex for the Cisco IOS decoders will not match the messages, not even the ones provided as examples in the decoder file.

- Aug 17 17:41:26 xyz.com 681: Aug 17 17:41:24.776 AEST: %SEC-6-IPACCESSLOGS: list 30 denied 124.254.75.141 1 packet
- Aug 20 11:33:41 RouterName 696: %SYS-5-CONFIG_I: Configured from
console by admin on vty0 (210.x.x.12)
- 681: Aug 17 17:41:24.776 AEST: %SEC-6-IPACCESSLOGS:
- 1348: .Jun 12 18:22:22 UTC: %SYS-5-CONFIG_I:
- 1348: *Jun 12 18:22:22 UTC: %SYS-5-CONFIG_I:
- 23: May 3 05:15:25.217 UTC: %SEC-6-IPACCESSLOGP:
- Possible regex:
"^%\w+-\d-\w+: |^\S\w\w+ \.\d \d\d:\S+ \w+: %\w+-\d-\w+:"
-->
<decoder name="cisco-ios">
<prematch>^%\w+-\d-\w+: </prematch>
</decoder>
<decoder name="cisco-ios">
<program_name />
<prematch>^%\w+-\d-\w+: </prematch>
</decoder>

Rule 31533 exception

Hello,

I have a server where rule 31533 is frequently triggered by me when using my webmail (Kopano), which makes lots of POST requests.
For now, I have increased the frequency and reduced the timeframe in order to avoid this behavior.
Is there a way to "fork" that rule into another one that could be applied to a specific virtual host?

I already encountered this with OSSEC some years ago regarding the same rule; I had an open ticket and a PR that led to a long discussion about whether or not to change the base rule, given the ecosystem of more and more AJAX apps (ossec/ossec-hids#461).

What do you think? This rule renders Drupal / IMCE / Kopano WebApp / probably lots of other AJAX-driven web apps unusable, since they probably make more than 12 POST requests in 20 seconds.

Unable to block an IP for an SSH brute-force attack

I have tried this link

https://blog.wazuh.com/blocking-attacks-active-response/

But when I tried to use the rule, it didn't ban any IP used for the SSH brute-force attack.
What could be wrong with my configuration?

Below is my configuration in ossec.conf:

<command>
    <name>firewall-drop</name>
    <executable>firewall-drop.sh</executable>
    <expect>srcip</expect>
    <timeout_allowed>yes</timeout_allowed>
</command>

Active-response

<active-response>
    <command>firewall-drop</command>
    <location>local</location>
    <rules_id>5712</rules_id> 
    <timeout>1800</timeout> 
</active-response>

Notes / hints to the OpenLDAP rules/decoders

Not quite sure if this is the correct place to report this, or if it would be better to post it in https://github.com/wazuh/wazuh-documentation (please let me know if I should re-open it there).

I'm running an OpenLDAP server on Debian 9/stretch and was wondering about the missing OpenLDAP logging. It seems that the default loglevel (at least on Debian) is "none" these days, so you won't get the logs shown in:

https://github.com/wazuh/wazuh-ruleset/blob/v2.1.1/decoders/0185-openldap_decoders.xml#L10-L19

One first has to create an LDIF file like the following:

dn: cn=config
replace: olcLogLevel
olcLogLevel: 256

and run it like:

ldapmodify -Y EXTERNAL -H ldapi:/// -f ./logging.ldif

to raise the loglevel (see https://www.openldap.org/doc/admin24/slapdconfig.html) to 256, which is the minimum needed to get those logs.

There seems to be another issue with the srcip as well: at least on my setup, the IP within those logs is from the application doing the authentication, not from the user:

Oct  2 19:51:22 example slapd[30864]: conn=1068 fd=19 ACCEPT from IP=192.168.0.2:59800 (IP=0.0.0.0:636)

(192.168.0.2 is the IP of the web application doing the LDAP authentication).

Error updating rulesets with wazuh-manager package.

Using the wazuh-manager package I tried updating the rules via:
/var/ossec/bin/update_ruleset.py

Fails with error:

### Wazuh ruleset ###
ERROR: Unkown: [Errno 2] No such file or directory: '/var/ossec/tmp/ruleset/downloads/wazuh-ruleset/update_ruleset.py'.
Exiting.

Tried updating via the automatic method script and it fails with:

sudo /var/ossec/update/ruleset/ossec_ruleset.py -a

OSSEC Wazuh Ruleset [0.100], 20170412

Creating a backup for folders '/var/ossec/etc' and '/var/ossec/rules'.
	Backup folder: /var/ossec/update/ruleset/backups/20170412_002
	[Done]

Checking directory structure.
	Error: It seems that we could not identify your installation. Install the ruleset manually or contact us for assistance.


Which method are we supposed to use for weekly rules updates?

Thanks for all the effort btw! Great product!

Rule: 18107 (level 3) -> 'Windows Logon Success'

Hi,

Currently migrating from Wazuh 1.1 to 2.1.

We have a number of custom rules that work off rule 18107 in 0220-msauth_rules.xml.

<rule id="18107" level="3">
  <if_sid>18104</if_sid>
  <id>^528$|^540$|^673$|^4624$|^4769$</id>
  <description>Windows Logon Success.</description>
  <group>authentication_success,pci_dss_10.2.5,</group>
</rule>

We no longer get alerts when we specify individual users in the user syntax

<rule id="100005" level="3">
  <if_sid>18107</if_sid>
  <id>^4624$</id>
  <match>Logon Type: 10</match>
  <hostname>server1|server2</hostname>
  <user>^user1|^user2</user>
  <description>logon alert for serversx</description>
</rule>

example of log from alert log

** Alert 1507730218.1797861: - windows,authentication_success,pci_dss_10.2.5,
2017 Oct 11 14:56:58 (server1) xx.xx.xx.xx->WinEvtLog
Rule: 18107 (level 3) -> 'Windows Logon Success.'
Src IP: xx.xx.xx.xx
User: (no user)
2017 Oct 11 14:56:57 WinEvtLog: Security: AUDIT_SUCCESS(4624): Microsoft-Windows-Security-Auditing: (no user): no domain: server1.testdomain.com: An account was successfully logged on. Subject: Security ID: S-1-5-18 Account Name: server1$ Account Domain: testdomain Logon ID: 0x Logon Type: 10 Impersonation Level: Impersonation New Logon: Security ID: S-1-5-21- Account Name: user1 Account Domain: server1 Logon ID: 0x Logon GUID: {00000000-0000-0000-0000-000000000000} Process Information: Process ID: 0x Process Name: C:\Windows\System32\winlogon.exe Network Information: Workstation Name: server1 Source Network Address: xx.xx.xx.xx Source Port: 0 Detailed Authentication Information: Logon Process: User32 Authentication Package: Negotiate Transited Services: - Package Name (NTLM only): - Key Length: 0 This event is generated when a logon session is created. It is generated on the computer that was accessed. The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Ser
type: Security
subject.security_id: S-1-5-18
subject.account_name: server1$
subject.account_domain: testdomain
subject.logon_id: 0x
security_id:
account_name: user1
account_domain: server1
logon_type: 10

I've noticed in Wazuh 2.1 that no matter which user you log on with, the log reports the user as User: (no user). This doesn't happen in 1.1, where it is populated with the actual logged-on user.

I think this might be what is causing the alert not to generate...

CVE files for Red Hat and SUSE as XCCDF with a profile

I built a script which gets the latest CVE patch files from Red Hat and SUSE.
These files are then converted to an XCCDF with a profile where all checks are activated.
I like the HTML output from oscap when using an XCCDF with a profile.

The script is a bash script which uses oscap and xsltproc.
It would be nice to add a CVE panel to the wazuh-kibana-app.

In agent.conf, insert:

<agent_config profile="sles11">
 <wodle name="open-scap">
    <timeout>1800</timeout>
    <interval>1d</interval>
    <scan-on-start>yes</scan-on-start>
    <content type="xccdf" path="cve-sles-11-ds.xml">
      <profile>xccdf_org.opensuse_profile_cve-sles-11</profile>
    </content>
  </wodle>
</agent_config>
<agent_config profile="sles12">
 <wodle name="open-scap">
    <timeout>1800</timeout>
    <interval>1d</interval>
    <scan-on-start>yes</scan-on-start>
    <content type="xccdf" path="cve-sles-12-ds.xml">
      <profile>xccdf_org.opensuse_profile_cve-sles-12</profile>
    </content>
  </wodle>
</agent_config>
<agent_config profile="rhel6">
 <wodle name="open-scap">
    <timeout>1800</timeout>
    <interval>1d</interval>
    <scan-on-start>yes</scan-on-start>
    <content type="xccdf" path="cve-redhat-6-ds.xml">
      <profile>xccdf_com.redhat.rhsa_profile_cve-rhel-6</profile>
    </content>
  </wodle>
</agent_config>
<agent_config profile="rhel7">
 <wodle name="open-scap">
    <timeout>1800</timeout>
    <interval>1d</interval>
    <scan-on-start>yes</scan-on-start>
    <content type="xccdf" path="cve-redhat-7-ds.xml">
      <profile>xccdf_com.redhat.rhsa_profile_cve-rhel-7</profile>
    </content>
  </wodle>
</agent_config>

Should I upload my files under wazuh-ruleset/tools/cve-xccdf?

dpkg log not triggering rules 290x rules

Hi,

I have noticed that events from the dpkg log aren't triggering rules (version 3.2.1).

It seems that the issue is in 0020-syslog_rules.xml, in rule 2900:

  <rule id="2900" level="0">
    <decoded_as>windows-date-format</decoded_as>
    <regex offet="after_parent">^startup |^status |^remove |^configure |^install |^purge |^trigproc </regex>
    <description>Dpkg (Debian Package) log.</description>
  </rule>

Several things are wrong here, which may be the reason why events from dpkg are not triggering alerts as they should:

  • typo (regex offset)
  • caret before status
  • installed status missing

The corrected rule helped me solve the issue:

  <rule id="2900" level="0">
    <decoded_as>windows-date-format</decoded_as>
    <regex offset="after_parent">startup |status |remove |configure |install |purge |trigproc |installed</regex>
    <description>Dpkg (Debian Package) log.</description>
  </rule>
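The effect of the caret and the missing alternative can be illustrated in Python (PCRE-style regex rather than the OSSEC dialect, so this is only a sketch of the logic, not the rule itself):

```python
import re

# Without an applied offset, "^startup" would have to match at the very
# start of the line (where the date prefix sits), so it never fires.
# Searching for the action words without anchors, and including
# "installed", matches the dpkg event types the rule lists.
ACTIONS = re.compile(r"\b(startup|status|remove|configure|install|purge|trigproc|installed)\b")

def is_dpkg_event(after_date):
    """True if the text following the dpkg date prefix names a dpkg action."""
    return ACTIONS.search(after_date) is not None
```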

Guide for own rules and decoders missing - OSSEC vs. Wazuh feature set

Maybe I missed something, but how can I create my own decoder with the features Wazuh brings? I cannot follow the same steps as for OSSEC, as it seems new rules are applied in a different way. For example, I would like to create a simple decoder for the HashiCorp Vault audit log. I looked at the Wazuh ruleset (for example 0040-auditd_decoders.xml) and, based on that, made this very raw decoder:

/var/ossec/etc/local_decoder.xml

<decoder name="vault">
    <prematch>^\.*{"time":"\.*","type":"\.*$</prematch>
</decoder>

<decoder name="vault-response">
  <parent>vault</parent>
  <regex>^{"time":"(\.*)","type":"(\.*)","error":"(\.*)","auth":{"client_token":"(\.*)","accessor":"(\.*)","display_name":"(\.*)","policies":null,"metadata":null},"request":{"id":"(\.*)","operation":"(\.*)","client_token":"(\.*)","path":"(\.*)","data":null,"remote_address":"(\.*)","wrap_ttl":0},"response":{"data":{"(\.*)"}}}$</regex>
 <order>time,type,errmsg,client_token,accessor,display_name,req_id,operation,request_token,path,remote_ip,response</order>
</decoder>

but when I restart OSSEC I get the error:

code-xml: Wrong field 'time' in the order of decoder 'vault-response'

which makes sense for original OSSEC, as it supports only specific order items.

http://ossec-docs.readthedocs.io/en/latest/syntax/head_decoders.html#element-decoder.order
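Since Vault audit logs are JSON, one alternative to a single large regex is to let a JSON parser do the field extraction, which is essentially what Wazuh's JSON decoder enables. A rough Python sketch of the idea (the dotted-path flattening below is an illustration, not the exact output format of JSON_Decoder):

```python
import json

def flatten(obj, prefix=""):
    """Flatten nested JSON into dotted-path keys, e.g. request.remote_address."""
    out = {}
    for key, value in obj.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, path + "."))
        else:
            out[path] = value
    return out

# Condensed, hypothetical Vault audit record.
event = json.loads('{"time":"t1","type":"response",'
                   '"request":{"operation":"read","remote_address":"10.0.0.9"}}')
fields = flatten(event)
```

This approach keeps working when fields are reordered or new keys appear, which a positional regex cannot do.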

updating ruleset wazuh 2.1.0

Hi there,
I have a Wazuh installation, version 2.1.0.
I'm doing regular updates of the ruleset as documented:
@weekly cd /var/ossec/bin && ./update_ruleset.py -r

It updated the ruleset to 3.1.0, but that ruleset generated the following issue:
2018/01/23 17:39:15 ossec-testrule: CRITICAL: (1202): Configuration error at 'ruleset/decoders/0006-json_decoders.xml'. Exiting.

I solved this by just re-downloading the old 2.1.0 ruleset.

To prevent the same issue in the future, should I disable update_ruleset?

Thank you

Possible missing "ovpn-server" program name in OpenVPN decoder

Currently the program_name in

https://github.com/wazuh/wazuh-ruleset/blob/master/decoders/0190-openvpn_decoders.xml#L9

is openvpn. However, when running a current OpenVPN 2.4.3 from https://community.openvpn.net/openvpn/wiki/OpenvpnSoftwareRepos on Debian Stretch, the following log entries are found in /var/log/syslog:

Aug 30 16:08:29 foo ovpn-server[107]: 192.168.178.1 [admin] Peer Connection Initiated with [AF_INET6]::ffff:192.168.178.1:44902

This ovpn-server should probably be added to the program_name. A few more examples can be found via e.g. https://www.google.de/search?q="ovpn-server"

Support MariaDB with rules / decoders

This is a follow-up of https://groups.google.com/forum/#!topic/ossec-list/WtogBgInsUY, where supporting the "audit plugin format" for MariaDB was discussed.

However, it seems even basic MariaDB logging is missing, as the syntax of the logs written to the file defined with log_error is quite different from the existing MySQL ones:

https://github.com/wazuh/wazuh-ruleset/blob/v2.1.1/rules/0295-mysql_rules.xml
https://github.com/wazuh/wazuh-ruleset/blob/v2.1.1/decoders/0150-mysql_decoders.xml

Below are a few examples from a log_error file, probably far from complete (not sure about syslog).

2017-09-25  9:40:07 140509614809664 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2017-09-25  9:40:07 140509614809664 [Note] InnoDB: The InnoDB memory heap is disabled
2017-09-25  9:40:07 140509614809664 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-09-25  9:40:07 140509614809664 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2017-09-25  9:40:07 140509614809664 [Note] InnoDB: Compressed tables use zlib 1.2.8
2017-09-25  9:40:07 140509614809664 [Note] InnoDB: Using Linux native AIO
2017-09-25  9:40:08 140509614809664 [Note] InnoDB: Using SSE crc32 instructions
2017-09-25  9:40:08 140509614809664 [Note] InnoDB: Initializing buffer pool, size = 10.0G
2017-09-25  9:40:08 140509614809664 [Note] InnoDB: Completed initialization of buffer pool
2017-09-25  9:40:11 140509614809664 [Note] InnoDB: Highest supported file format is Barracuda.
2017-09-25  9:40:11 140509614809664 [Note] InnoDB: The log sequence number 134976622415 in ibdata file do not match the log sequence number 141814250295 in the ib_logfiles!
2017-09-25  9:40:28 140509614809664 [Note] InnoDB: Processed 66 .ibd/.isl files
2017-09-25  9:40:46 140509614809664 [Note] InnoDB: Processed 121 .ibd/.isl files
2017-09-25  9:41:02 140509614809664 [Note] InnoDB: Processed 176 .ibd/.isl files
2017-09-25  9:41:20 140509614809664 [Note] InnoDB: Processed 220 .ibd/.isl files
2017-09-25  9:41:37 140509614809664 [Note] InnoDB: Processed 275 .ibd/.isl files
2017-09-25  9:41:55 140509614809664 [Note] InnoDB: Processed 330 .ibd/.isl files
2017-09-25  9:42:11 140509614809664 [Note] InnoDB: Processed 407 .ibd/.isl files
2017-09-25  9:42:27 140509614809664 [Note] InnoDB: Processed 517 .ibd/.isl files
2017-09-25  9:42:35 140509614809664 [Note] InnoDB: Restoring possible half-written data pages from the doublewrite buffer...
2017-09-25  9:42:46 140509614809664 [Note] InnoDB: Read redo log up to LSN=141814250496
InnoDB: Last MySQL binlog file position 0 279889702, file name /var/log/mysql/mariadb-bin.000145
2017-09-25  9:43:58 140509614809664 [Note] InnoDB: 128 rollback segment(s) are active.
2017-09-25  9:43:58 140509614809664 [Note] InnoDB: Waiting for purge to start
2017-09-25  9:43:58 140509614809664 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.36-82.1 started; log sequence number 141814250295
2017-09-25  9:44:03 140497406363392 [Note] InnoDB: Dumping buffer pool(s) not yet started
2017-09-25  9:44:03 140509614809664 [Note] Plugin 'FEEDBACK' is disabled.
2017-09-25  9:44:04 140509614809664 [Note] Recovering after a crash using /var/log/mysql/mariadb-bin
2017-09-25  9:44:56 140509614809664 [Note] Starting crash recovery...
2017-09-25  9:44:56 140509614809664 [Note] Crash recovery finished.
2017-09-25  9:44:58 140509614809664 [Note] Server socket created on IP: '0.0.0.0'.
2017-09-25  9:44:58 140509613980416 [Note] /usr/sbin/mysqld: Normal shutdown

2017-09-25  9:45:00 140509613677312 [Note] Event Scheduler: scheduler thread started with id 2
2017-09-25  9:45:00 140509614809664 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.1.26-MariaDB-0+deb9u1'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  Debian 9.1
2017-09-25  9:45:00 140509613980416 [Note] Event Scheduler: Killing the scheduler thread, thread id 2
2017-09-25  9:45:00 140509613980416 [Note] Event Scheduler: Waiting for the scheduler thread to reply
2017-09-25  9:45:00 140509613980416 [Note] Event Scheduler: Stopped
2017-09-25  9:45:00 140509613980416 [Note] Event Scheduler: Purging the queue. 0 events
2017-09-25  9:45:01 140497439934208 [Note] InnoDB: FTS optimize thread exiting.
2017-09-25  9:45:01 140509613980416 [Note] InnoDB: Starting shutdown...
2017-09-25  9:45:01 140509613980416 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-25  9:45:04 140509613980416 [Note] InnoDB: Shutdown completed; log sequence number 141814250305
2017-09-25  9:45:04 140509613980416 [Note] /usr/sbin/mysqld: Shutdown complete

2017-09-25 10:11:39 139667297788480 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2017-09-25 10:11:39 139667297788480 [Note] InnoDB: The InnoDB memory heap is disabled
2017-09-25 10:11:39 139667297788480 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-09-25 10:11:39 139667297788480 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2017-09-25 10:11:39 139667297788480 [Note] InnoDB: Compressed tables use zlib 1.2.8
2017-09-25 10:11:39 139667297788480 [Note] InnoDB: Using Linux native AIO
2017-09-25 10:11:39 139667297788480 [Note] InnoDB: Using SSE crc32 instructions
2017-09-25 10:11:39 139667297788480 [Note] InnoDB: Initializing buffer pool, size = 10.0G
2017-09-25 10:11:39 139667297788480 [Note] InnoDB: Completed initialization of buffer pool
2017-09-25 10:11:39 139667297788480 [Note] InnoDB: Highest supported file format is Barracuda.
2017-09-25 10:11:39 139667297788480 [Note] InnoDB: 128 rollback segment(s) are active.
2017-09-25 10:11:39 139667297788480 [Note] InnoDB: Waiting for purge to start
2017-09-25 10:11:39 139667297788480 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.36-82.1 started; log sequence number 141814250305
2017-09-25 10:11:40 139655093647104 [Note] InnoDB: Dumping buffer pool(s) not yet started
2017-09-25 10:11:40 139667297788480 [Note] Plugin 'FEEDBACK' is disabled.
2017-09-25 10:11:41 139667297788480 [Note] Server socket created on IP: '0.0.0.0'.
2017-09-25 10:11:41 139667296959232 [Note] Event Scheduler: scheduler thread started with id 2
2017-09-25 10:11:41 139667297788480 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.1.26-MariaDB-0+deb9u1'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  Debian 9.1
2017-09-25 10:12:05 139667224206080 [ERROR] mysqld: Table './example' is marked as crashed and should be repaired
2017-09-25 10:12:05 139667224206080 [Warning] Checking table:   './example'
2017-09-25 10:25:05 139665896770304 [Note] /usr/sbin/mysqld: Normal shutdown

2017-09-25 10:25:05 139665896770304 [Note] Event Scheduler: Killing the scheduler thread, thread id 2
2017-09-25 10:25:05 139665896770304 [Note] Event Scheduler: Waiting for the scheduler thread to reply
2017-09-25 10:25:05 139665896770304 [Note] Event Scheduler: Stopped
2017-09-25 10:25:05 139665896770304 [Note] Event Scheduler: Purging the queue. 0 events
2017-09-25 10:25:08 139655118825216 [Note] InnoDB: FTS optimize thread exiting.
2017-09-25 10:25:08 139665896770304 [Note] InnoDB: Starting shutdown...
2017-09-25 10:25:09 139665896770304 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-25 10:25:12 139665896770304 [Note] InnoDB: Shutdown completed; log sequence number 141819266706
2017-09-25 10:25:12 139665896770304 [Note] /usr/sbin/mysqld: Shutdown complete

2017-09-25 10:25:35 139864032768576 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2017-09-25 10:25:35 139864032768576 [Note] InnoDB: The InnoDB memory heap is disabled
2017-09-25 10:25:35 139864032768576 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-09-25 10:25:35 139864032768576 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2017-09-25 10:25:35 139864032768576 [Note] InnoDB: Compressed tables use zlib 1.2.8
2017-09-25 10:25:35 139864032768576 [Note] InnoDB: Using Linux native AIO
2017-09-25 10:25:35 139864032768576 [Note] InnoDB: Using SSE crc32 instructions
2017-09-25 10:25:35 139864032768576 [Note] InnoDB: Initializing buffer pool, size = 10.0G
2017-09-25 10:25:36 139864032768576 [Note] InnoDB: Completed initialization of buffer pool
2017-09-25 10:25:36 139864032768576 [Note] InnoDB: Highest supported file format is Barracuda.
2017-09-25 10:25:37 139864032768576 [Note] InnoDB: 128 rollback segment(s) are active.
2017-09-25 10:25:37 139864032768576 [Note] InnoDB: Waiting for purge to start
2017-09-25 10:25:37 139864032768576 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.36-82.1 started; log sequence number 141819266706
2017-09-25 10:25:38 139864032768576 [Note] Plugin 'FEEDBACK' is disabled.
2017-09-25 10:25:38 139851827476224 [Note] InnoDB: Dumping buffer pool(s) not yet started
2017-09-25 10:25:40 139864032768576 [Note] Server socket created on IP: '0.0.0.0'.
2017-09-25 10:25:40 139864031939328 [Note] Event Scheduler: scheduler thread started with id 2
2017-09-25 10:25:40 139864032768576 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.1.26-MariaDB-0+deb9u1'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  Debian 9.1
2017-10-01 21:47:17 139862629083904 [Note] /usr/sbin/mysqld: Normal shutdown

2017-10-01 21:47:17 139862629083904 [Note] Event Scheduler: Killing the scheduler thread, thread id 2
2017-10-01 21:47:17 139862629083904 [Note] Event Scheduler: Waiting for the scheduler thread to reply
2017-10-01 21:47:17 139862629083904 [Note] Event Scheduler: Stopped
2017-10-01 21:47:17 139862629083904 [Note] Event Scheduler: Purging the queue. 0 events
2017-10-01 21:47:18 139851852654336 [Note] InnoDB: FTS optimize thread exiting.
2017-10-01 21:47:18 139862629083904 [Note] InnoDB: Starting shutdown...
2017-10-01 21:47:18 139862629083904 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-10-01 21:47:24 139862629083904 [Note] InnoDB: Shutdown completed; log sequence number 145629392559
2017-10-01 21:47:25 139862629083904 [Note] /usr/sbin/mysqld: Shutdown complete

2017-10-01 21:47:33 140068894642752 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2017-10-01 21:47:33 140068894642752 [Note] InnoDB: The InnoDB memory heap is disabled
2017-10-01 21:47:33 140068894642752 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-10-01 21:47:33 140068894642752 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2017-10-01 21:47:33 140068894642752 [Note] InnoDB: Compressed tables use zlib 1.2.8
2017-10-01 21:47:33 140068894642752 [Note] InnoDB: Using Linux native AIO
2017-10-01 21:47:33 140068894642752 [Note] InnoDB: Using SSE crc32 instructions
2017-10-01 21:47:33 140068894642752 [Note] InnoDB: Initializing buffer pool, size = 10.0G
2017-10-01 21:47:33 140068894642752 [Note] InnoDB: Completed initialization of buffer pool
2017-10-01 21:47:33 140068894642752 [Note] InnoDB: Highest supported file format is Barracuda.
2017-10-01 21:47:34 140068894642752 [Note] InnoDB: 128 rollback segment(s) are active.
2017-10-01 21:47:34 140068894642752 [Note] InnoDB: Waiting for purge to start
2017-10-01 21:47:34 140068894642752 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.36-82.1 started; log sequence number 145629392559
2017-10-01 21:47:35 140056689866496 [Note] InnoDB: Dumping buffer pool(s) not yet started
2017-10-01 21:47:35 140068894642752 [Note] Plugin 'FEEDBACK' is disabled.
2017-10-01 21:47:35 140068894642752 [Note] Server socket created on IP: '0.0.0.0'.
2017-10-01 21:47:35 140068893813504 [Note] Event Scheduler: scheduler thread started with id 2
2017-10-01 21:47:35 140068894642752 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.1.26-MariaDB-0+deb9u1'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  Debian 9.1
2017-10-02  0:20:11 140067493292800 [Note] /usr/sbin/mysqld: Normal shutdown

2017-10-02  0:20:11 140067493292800 [Note] Event Scheduler: Killing the scheduler thread, thread id 2
2017-10-02  0:20:11 140067493292800 [Note] Event Scheduler: Waiting for the scheduler thread to reply
2017-10-02  0:20:11 140067493292800 [Note] Event Scheduler: Stopped
2017-10-02  0:20:11 140067493292800 [Note] Event Scheduler: Purging the queue. 0 events
2017-10-02  0:20:11 140056715044608 [Note] InnoDB: FTS optimize thread exiting.
2017-10-02  0:20:11 140067493292800 [Note] InnoDB: Starting shutdown...
2017-10-02  0:20:12 140067493292800 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-10-02  0:20:15 140067493292800 [Note] InnoDB: Shutdown completed; log sequence number 145690444320
2017-10-02  0:20:15 140067493292800 [Note] /usr/sbin/mysqld: Shutdown complete

2017-10-02  0:20:38 139861115417152 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2017-10-02  0:20:38 139861115417152 [Note] InnoDB: The InnoDB memory heap is disabled
2017-10-02  0:20:38 139861115417152 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-10-02  0:20:38 139861115417152 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2017-10-02  0:20:38 139861115417152 [Note] InnoDB: Compressed tables use zlib 1.2.8
2017-10-02  0:20:38 139861115417152 [Note] InnoDB: Using Linux native AIO
2017-10-02  0:20:38 139861115417152 [Note] InnoDB: Using SSE crc32 instructions
2017-10-02  0:20:38 139861115417152 [Note] InnoDB: Initializing buffer pool, size = 10.0G
2017-10-02  0:20:38 139861115417152 [Note] InnoDB: Completed initialization of buffer pool
2017-10-02  0:20:38 139861115417152 [Note] InnoDB: Highest supported file format is Barracuda.
2017-10-02  0:20:39 139861115417152 [Warning] InnoDB: Resizing redo log from 2*16384 to 2*65536 pages, LSN=145690444320
2017-10-02  0:20:39 139861115417152 [Warning] InnoDB: Starting to delete and rewrite log files.
2017-10-02  0:20:39 139861115417152 [Note] InnoDB: Setting log file ./ib_logfile101 size to 1024 MB
2017-10-02  0:21:01 139861115417152 [Note] InnoDB: Setting log file ./ib_logfile1 size to 1024 MB
2017-10-02  0:21:24 139861115417152 [Note] InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
2017-10-02  0:21:24 139861115417152 [Warning] InnoDB: New log files created, LSN=145690444812
2017-10-02  0:21:24 139861115417152 [Note] InnoDB: 128 rollback segment(s) are active.
2017-10-02  0:21:24 139861115417152 [Note] InnoDB: Waiting for purge to start
2017-10-02  0:21:24 139861115417152 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.36-82.1 started; log sequence number 145690444320
2017-10-02  0:21:25 139848912434944 [Note] InnoDB: Dumping buffer pool(s) not yet started
2017-10-02  0:21:25 139861115417152 [Note] Plugin 'FEEDBACK' is disabled.
2017-10-02  0:21:25 139861115417152 [Note] Server socket created on IP: '0.0.0.0'.
2017-10-02  0:21:25 139861114587904 [Note] Event Scheduler: scheduler thread started with id 2
2017-10-02  0:21:25 139861115417152 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.1.26-MariaDB-0+deb9u1'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  Debian 9.1
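As a starting point, a decoder for the log_error lines above could be sketched roughly like this (untested; the decoder names and the choice of the status/extra_data fields are mine, not taken from the ruleset; note single-digit hours get an extra leading space in these logs):

```xml
<!-- Hypothetical sketch for the MariaDB log_error format above, e.g.:
     2017-09-25  9:40:07 140509614809664 [Note] InnoDB: ... -->
<decoder name="mariadb">
  <prematch>^\d\d\d\d-\d\d-\d\d \s*\d+:\d\d:\d\d \d+ [</prematch>
</decoder>

<decoder name="mariadb-fields">
  <parent>mariadb</parent>
  <regex>^\d\d\d\d-\d\d-\d\d \s*\d+:\d\d:\d\d \d+ [(\w+)] (\.+)$</regex>
  <order>status, extra_data</order>
</decoder>
```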

False positives with CIS benchmarks

Hi,

We are receiving some false positives when performing CIS benchmarks against SLES11 servers. For instance, 9.2.8 Disable SSH Root Login reports an insecure configuration even when it is set like this in the configuration file:

#PermitRootLogin yes
PermitRootLogin no

I've had a look at the content of the rule, and I wonder if it could be because of how the regex matches:

f:/etc/ssh/sshd_config -> r:^#\s*PermitRootLogin;

Shouldn't there be a $ sign after "PermitRootLogin", like this:

f:/etc/ssh/sshd_config -> r:^#\s*PermitRootLogin$;
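The difference is quick to see with Python's re as an approximation of the OS_Regex syntax (in OS_Regex, \s matches a space):

```python
import re

# The two relevant lines from /etc/ssh/sshd_config in the report above.
lines = ["#PermitRootLogin yes", "PermitRootLogin no"]

loose = re.compile(r"^#\s*PermitRootLogin")      # current check
anchored = re.compile(r"^#\s*PermitRootLogin$")  # proposed check

# The loose pattern also matches the commented-out default line,
# which is what appears to trigger the false positive.
print([bool(loose.match(line)) for line in lines])     # [True, False]
print([bool(anchored.match(line)) for line in lines])  # [False, False]
```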

thank you
regards

USB rules for linux

Hello Wazuh Team,

It is critical for me to be able to monitor USB devices being plugged/unplugged, but the rules currently only cover a device being plugged in.

<rule id="81101" level="3">
<if_sid>81100</if_sid>
<match>New USB device found</match>
<description>Attached USB Storage</description>
</rule>

The rule will catch any device being plugged in, but the description is incorrect. In my testing, however, I found that the kernel generates special messages for mass storage devices (usb-storage).

Here is a sample of me plugging in various devices in this order: Nexus 5X phone, Nexus 5X phone, webcam, USB stick:

[   56.437279] usb 1-1: New USB device found, idVendor=18d1, idProduct=4ee8
[   56.437285] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[   56.437288] usb 1-1: Product: Nexus 5X
[   56.437291] usb 1-1: Manufacturer: LGE
[   56.437294] usb 1-1: SerialNumber: ffffffffffffffff
[   56.532567] usbcore: registered new interface driver snd-usb-audio
[   79.950261] usb 1-1: USB disconnect, device number 2

[   90.455399] usb 1-1: new high-speed USB device number 3 using xhci_hcd
[   90.804970] usb 1-1: New USB device found, idVendor=18d1, idProduct=4ee5
[   90.804979] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[   90.804984] usb 1-1: Product: Nexus 5X
[   90.804988] usb 1-1: Manufacturer: LGE
[   90.804992] usb 1-1: SerialNumber: ffffffffffffffff
[  106.878195] usb 1-1: USB disconnect, device number 3

[  338.996421] usb 1-1: new high-speed USB device number 4 using xhci_hcd
[  339.335342] usb 1-1: New USB device found, idVendor=1bcf, idProduct=2b8d
[  339.335345] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[  339.335348] usb 1-1: Product: Integrated_Webcam_HD
[  339.335349] usb 1-1: Manufacturer: SunplusIT Inc
[  339.423855] media: Linux media interface: v0.10
[  339.435537] Linux video capture interface: v2.00
[  339.456171] uvcvideo: Found UVC 1.00 device Integrated_Webcam_HD (1bcf:2b8d)
[  339.466167] input: Integrated_Webcam_HD as /devices/pci0000:00/0000:00:0c.0/usb1/1-1/1-1:1.0/input/input7
[  339.466953] usbcore: registered new interface driver uvcvideo
[  339.466957] USB Video Class driver (1.1.1)
[  360.620906] usb 1-1: USB disconnect, device number 4

[  369.763694] usb 1-1: new high-speed USB device number 5 using xhci_hcd
[  370.115393] usb 1-1: New USB device found, idVendor=ffff, idProduct=5678
[  370.115396] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[  370.115397] usb 1-1: Product: Disk 2.0
[  370.115398] usb 1-1: Manufacturer: USB
[  370.115399] usb 1-1: SerialNumber: FFFFFFFFFFFFFFFFFFF
[  370.157487] usb-storage 1-1:1.0: USB Mass Storage device detected
[  370.158188] scsi host3: usb-storage 1-1:1.0
[  370.158277] usbcore: registered new interface driver usb-storage
[  370.161056] usbcore: registered new interface driver uas
[  371.158057] scsi 3:0:0:0: Direct-Access     VendorCo ProductCode      2.00 PQ: 0 ANSI: 4
[  371.159568] sd 3:0:0:0: Attached scsi generic sg1 type 0
[  371.160830] sd 3:0:0:0: [sdb] 7863376 512-byte logical blocks: (4.03 GB/3.75 GiB)
[  371.161171] sd 3:0:0:0: [sdb] Write Protect is off
[  371.161173] sd 3:0:0:0: [sdb] Mode Sense: 33 00 00 00
[  371.161458] sd 3:0:0:0: [sdb] No Caching mode page found
[  371.161479] sd 3:0:0:0: [sdb] Assuming drive cache: write through
[  371.169084]  sdb: sdb1
[  371.172642] sd 3:0:0:0: [sdb] Attached SCSI removable disk
[  499.576588] usb 1-1: USB disconnect, device number 5

It seems that the USB strings are printed on separate lines, and therefore arrive as separate events in Wazuh. I'm not sure whether those could be combined in the rules; my guess is no. That information would certainly be useful, as some devices provide a serial number, for example.
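To cover disconnects and the dedicated mass-storage message shown above, companion rules could be sketched along these lines (rule IDs 81102/81103 are assumptions, not checked against the ruleset for collisions):

```xml
<rule id="81102" level="3">
  <if_sid>81100</if_sid>
  <match>USB Mass Storage device detected</match>
  <description>Attached USB storage device</description>
</rule>

<rule id="81103" level="3">
  <if_sid>81100</if_sid>
  <match>USB disconnect</match>
  <description>USB device disconnected</description>
</rule>
```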

Missing logging of REJECT/DENY iptables in 0060-firewall_rules.xml

When checking https://github.com/wazuh/wazuh-ruleset/blob/a275d5c6144b47ca71f945f3a709bf2328e79155/rules/0060-firewall_rules.xml it seems only DROP messages are evaluated there.

Depending on the use case, an administrator might choose REJECT/DENY for the iptables rules as well (see e.g. http://www.linux-admins.net/2013/07/drop-versus-reject-packet.html).
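Assuming the DROP rules in that file key on the decoded action field, a REJECT counterpart might be sketched like this (the rule ID and level are placeholders; the actual structure of the rules in 0060-firewall_rules.xml should be mirrored instead):

```xml
<!-- Hypothetical sketch: mirror the DROP handling for REJECT actions. -->
<rule id="4190" level="5">
  <if_sid>4100</if_sid>
  <action>REJECT</action>
  <description>Firewall reject event.</description>
  <group>firewall_block,</group>
</rule>
```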

I'm also quite unsure whether this comment is still valid these days:

<!-- We don't log firewall events, because they go
- to their own log file.
-->

Ref: ossec/ossec-hids#1267

Debian SCAP/OVAL content

The scap_content/cve-debian-oval.xml file is outdated and seems to be an old version of the oval-definitions-2016.xml definitions file.

For some time now, the Debian OVAL files have been released "per-release" (oval-definitions-jessie.xml) instead of "per-year" (oval-definitions-2016.xml).

see https://www.debian.org/security/oval/

Since these files are updated daily, or even multiple times per day, there might be a better update process than this Git repository (maybe add an option to update_ruleset.py to fetch them over HTTPS?).

New/additional postfix logs where srcip can't be extracted

Missing srcip

Sep 21 22:52:12 example postfix/smtpd[14927]: warning: hostname mail.example.com does not resolve to address 192.168.0.1: Name or service not known

Result of ossec-logtest:

**Phase 1: Completed pre-decoding.
       full event: 'Sep 21 22:52:12 example postfix/smtpd[14927]: warning: hostname mail.example.com does not resolve to address 192.168.0.1: Name or service not known'
       hostname: 'example'
       program_name: 'postfix/smtpd'
       log: 'warning: hostname mail.example.com does not resolve to address 192.168.0.1: Name or service not known'

**Phase 2: Completed decoding.
       decoder: 'postfix'

**Phase 3: Completed filtering (rules).
       Rule id: '3398'
       Level: '6'
       Description: 'Postfix: Illegal address from unknown sender'
**Alert to be generated.

Missing catch of rule 3398

Sep 21 22:52:12 example postfix/smtpd[14927]: warning: hostname example.com does not resolve to address 192.168.0.1: Name or service not known

Result of ossec-logtest:

**Phase 1: Completed pre-decoding.
       full event: 'Sep 21 22:52:12 example postfix/smtpd[14927]: warning: hostname example.com does not resolve to address 192.168.0.1: Name or service not known'
       hostname: 'example'
       program_name: 'postfix/smtpd'
       log: 'warning: hostname example.com does not resolve to address 192.168.0.1: Name or service not known'

**Phase 2: Completed decoding.
       decoder: 'postfix'

**Phase 3: Completed filtering (rules).
       Rule id: '3395'
       Level: '0'
       Description: 'Grouping of the postfix warning rules.'
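A child decoder to pull the srcip out of these warnings could be sketched like this (untested; the decoder name is made up, and the parent is assumed to be the existing postfix decoder shown in the logtest output):

```xml
<!-- Hypothetical sketch: extract the address from
     "warning: hostname ... does not resolve to address X" lines. -->
<decoder name="postfix-warning-noresolve">
  <parent>postfix</parent>
  <prematch>warning: hostname \S+ does not resolve to address </prematch>
  <regex offset="after_prematch">^(\d+.\d+.\d+.\d+)</regex>
  <order>srcip</order>
</decoder>
```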

Fortinet decoders don't work properly if the log comes from a FortiAnalyzer.

Looking at the bottom of 0390-fortigate_rules.xml:

<!--
Mar 22 19:21:00 10.10.10.10 date=2016-03-22 time=19:20:46 devname=Text devid=FGT3HD0000000000 logid=0000018000 type=anomaly subtype=anomaly level=alert vd="root" severity=critical srcip=10.10.10.35 dstip=10.10.10.84 srcintf="port2" sessionid=0 action=detected proto=6 service=tcp/36875 count=1903 attack="tcp_syn_flood" srcport=32835 dstport=2960 attackid=100663396 profile="DoS-policy1" ref="http://www.fortinet.com/ids/VID100663396" msg="anomaly: tcp_syn_flood, 2001 > threshold 2000, repeats 1903 times" crscore=50 crlevel=critical

Mar 22 19:21:00 10.10.10.10 date=2016-03-22 time=19:20:46 devname=Text devid=FGT3HD0000000000 logid=0000018000 type=anomaly subtype=anomaly level=alert vd="root" severity=critical srcip=10.10.10.61 dstip=10.10.10.84 srcintf="port2" sessionid=0 action=dropped proto=6 service=NONE count=9 attack="IP.Bad.Header" attackid=127 profile="N/A" ref="http://www.fortinet.com/ids/VID127" msg="anomaly: IP.Bad.Header, repeats 9 times" crscore=50 crlevel=critical
-->
<rule id="81628" level="11">
    <if_sid>81603</if_sid>
    <match>attack</match>
    <action>detected</action>
    <description>Fortigate Attack Detected</description>
    <group>attack,</group>
</rule>

<rule id="81629" level="3">
    <if_sid>81603</if_sid>
    <match>attack</match>
    <action>dropped</action>
    <description>Fortigate Attack Dropped</description>
    <group>attack,</group>
</rule>

These work for the samples provided. However, when tested with an example from a FortiAnalyzer, action becomes alert (it appears to match the level rather than the action):

Test Message :

Jun 20 10:13:47 10.0.0.21 severity=critical from=FORTAN.TEST(FAZ-VM00000XXXXX) trigger=IPS - Critical Severity log="date=2017-06-20 time=10:16:52 clusterid=FGHA001893200000_CID logver=52 itime=1497950025 devname=FWD001 devid=FG800C3914800000 logid=16384 type=utm subtype=ips eventtype=signature level=alert vd=EXT_SVCS severity=critical srcip=10.1.10.104 dstip=10.0.0.80 srcintf="EXTRNL" dstintf="PUBDMZ" sessionid=3691241729 action=dropped proto=6 service=HTTP attack="Apache.Struts.Jakarta.Multipart.Parser.Code.Execution" srcport=48196 dstport=80 direction=outgoing attackid=43745 profile="IPS_DEFAULT" ref="http://www.fortinet.com/ids/VID43745" user="" incidentserialno=755485169 msg="applications3: Apache.Struts.Jakarta.Multipart.Parser.Code.Execution," crscore=50 crlevel=critical"

**Phase 1: Completed pre-decoding.
full event: 'Jun 20 10:13:47 10.0.0.21 severity=critical from=FORTAN.TEST(FAZ-VM00000XXXXX) trigger=IPS - Critical Severity log="date=2017-06-20 time=10:16:52 clusterid=FGHA001893200000_CID logver=52 itime=1497950025 devname=FWD001 devid=FG800C3914800000 logid=16384 type=utm subtype=ips eventtype=signature level=alert vd=EXT_SVCS severity=critical srcip=10.1.10.104 dstip=10.0.0.80 srcintf="EXTRNL" dstintf="PUBDMZ" sessionid=3691241729 action=dropped proto=6 service=HTTP attack="Apache.Struts.Jakarta.Multipart.Parser.Code.Execution" srcport=48196 dstport=80 direction=outgoing attackid=43745 profile="IPS_DEFAULT" ref="http://www.fortinet.com/ids/VID43745" user="" incidentserialno=755485169 msg="applications3: Apache.Struts.Jakarta.Multipart.Parser.Code.Execution," crscore=50 crlevel=critical"'
hostname: '10.0.0.21'
program_name: '(null)'
log: 'severity=critical from=FORTAN.TEST(FAZ-VM00000XXXXX) trigger=IPS - Critical Severity log="date=2017-06-20 time=10:16:52 clusterid=FGHA001893200000_CID logver=52 itime=1497950025 devname=FWD001 devid=FG800C3914800000 logid=16384 type=utm subtype=ips eventtype=signature level=alert vd=EXT_SVCS severity=critical srcip=10.1.10.104 dstip=10.0.0.80 srcintf="EXTRNL" dstintf="PUBDMZ" sessionid=3691241729 action=dropped proto=6 service=HTTP attack="Apache.Struts.Jakarta.Multipart.Parser.Code.Execution" srcport=48196 dstport=80 direction=outgoing attackid=43745 profile="IPS_DEFAULT" ref="http://www.fortinet.com/ids/VID43745" user="" incidentserialno=755485169 msg="applications3: Apache.Struts.Jakarta.Multipart.Parser.Code.Execution," crscore=50 crlevel=critical"'

**Phase 2: Completed decoding.
decoder: 'fortigate-firewall-v5'
action: 'alert'
srcip: '10.1.10.104'
dstip: '10.0.0.80'
srcport: '48196'
dstport: '80'
extra_data: '"applications3: Apache.Struts.Jakarta.Multipart.Parser.Code.Execution," crscore=50 crlevel=critical"'

**Phase 3: Completed filtering (rules).
Rule id: '81603'
Level: '0'
Description: 'Fortigate messages grouped.'

The action being set to alert seems to be the issue, probably in the decoder.
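The underlying problem can be illustrated outside the decoder syntax: with explicit key=value parsing, the position of level= relative to action= stops mattering (a Python sketch of the idea, not the actual decoder logic; the log string is abbreviated from the test message above):

```python
import re

# Abbreviated FortiAnalyzer-wrapped event from the test message above.
log = ('severity=critical trigger=IPS level=alert vd=EXT_SVCS '
       'action=dropped proto=6 '
       'attack="Apache.Struts.Jakarta.Multipart.Parser.Code.Execution"')

# Parse every key=value pair; quoted values are kept whole.
kv = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', log))

print(kv["action"])  # dropped
print(kv["level"])   # alert
```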

Problem with kernel decoder - broken?

There is a really strange problem with the kernel decoder.

With ossec-logtest, the following log entry works fine and is correctly recognized (it is taken directly from the kernel decoder file example):

Aug 17 10:03:37 myhostname kernel: SFW2-INext-DROP-DEFLT IN=eth0 OUT= MAC=00:08:02:da:c8:51:00:0f:f7:74:31:8a:08:00 SRC=1.2.3.36 DST=1.2.3.194 LEN=28 TOS=0x00 PREC=0x00 TTL=44 ID=60200 PROTO=ICMP TYPE=8 CODE=0 ID=10466 SEQ=21229

However, if I echo the same line to /var/log/messages, it will not work.
Even with logall enabled, it still won't appear in archives.log.

I noticed the same behaviour when we tried to send syslog messages from a VyOS firewall to the syslog listener: they just disappear (yes, even when logall is enabled). However, logtest says the log entry is just fine.

_**Phase 1: Completed pre-decoding.
full event: 'Aug 17 10:03:37 myhostname kernel: SFW2-INext-DROP-DEFLT IN=eth0 OUT= MAC=00:08:02:da:c8:51:00:0f:f7:74:31:8a:08:00 SRC=1.2.3.36 DST=1.2.3.194 LEN=28 TOS=0x00 PREC=0x00 TTL=44 ID=60200 PROTO=ICMP TYPE=8 CODE=0 ID=10466 SEQ=21229'
timestamp: 'Aug 17 10:03:37'
hostname: 'myhostname'
program_name: 'kernel'
log: 'SFW2-INext-DROP-DEFLT IN=eth0 OUT= MAC=00:08:02:da:c8:51:00:0f:f7:74:31:8a:08:00 SRC=1.2.3.36 DST=1.2.3.194 LEN=28 TOS=0x00 PREC=0x00 TTL=44 ID=60200 PROTO=ICMP TYPE=8 CODE=0 ID=10466 SEQ=21229'

**Phase 2: Completed decoding.
decoder: 'kernel'
action: 'SFW2-INext-DROP-DEFLT'
srcip: '1.2.3.36'
dstip: '1.2.3.194'
proto: 'ICMP'

**Phase 3: Completed filtering (rules).
Rule id: '4100'
Level: '0'
Description: 'Firewall rules grouped.'_

If I remove the "SFW2-INext-DROP-DEFLT" after the "kernel:", it appears in archives.log, but ossec-logtest is no longer able to parse it.

update_ruleset script name doesn't match with Wazuh 3.1.0

The Wazuh 3.1.0 script to update the ruleset is /var/ossec/bin/update_ruleset; however, the script present on branch 3.1 is update_ruleset.py. As a result, once update_ruleset is executed for the first time, another update_ruleset.py appears in /var/ossec/bin/.

Expected behaviour:

  • Keep the original name update_ruleset (by removing the .py suffix)

OSSEC-DBD error on invalid element in rules file.

Versions:
Wazuh 3.3.0 (bug was present in previous versions)
ossec-dbd enable = true

Problem:

ossec-dbd will not start properly and reports errors about an invalid value in an element.

ossec.json errors:

{"timestamp":"2018/03/13 12:11:11","tag":"ossec-dbd","level":"error","description":"(1235): Invalid value for element 'options': no_full_log."}
{"timestamp":"2018/03/13 12:11:11","tag":"ossec-dbd","level":"error","description":"(1238): Invalid value for element 'options': no_full_log"}
{"timestamp":"2018/03/13 12:11:11","tag":"ossec-dbd","level":"error","description":"(1220): Error loading the rules: 'ruleset/rules/0520-vulnerability-detector.xml'."}
{"timestamp":"2018/03/13 12:11:11","tag":"ossec-dbd","level":"critical","description":"(1202): Configuration error at '/var/ossec/etc/ossec.conf'."}
{"timestamp":"2018/03/13 12:11:19","tag":"ossec-remoted","level":"error","description":"(1501): No IP or network allowed in the access list for syslog. No reason for running it. Exiting."}
{"timestamp":"2018/03/13 12:11:27","tag":"ossec-dbd","level":"error","description":"(1235): Invalid value for element 'options': no_full_log."}
{"timestamp":"2018/03/13 12:11:27","tag":"ossec-dbd","level":"error","description":"(1238): Invalid value for element 'options': no_full_log"}
{"timestamp":"2018/03/13 12:11:27","tag":"ossec-dbd","level":"error","description":"(1220): Error loading the rules: 'ruleset/rules/0520-vulnerability-detector.xml'."}
{"timestamp":"2018/03/13 12:11:27","tag":"ossec-dbd","level":"critical","description":"(1202): Configuration error at '/var/ossec/etc/ossec.conf'."}
{"timestamp":"2018/03/13 12:13:48","tag":"ossec-remoted","level":"error","description":"(1501): No IP or network allowed in the access list for syslog. No reason for running it. Exiting."}
{"timestamp":"2018/03/13 12:13:58","tag":"ossec-dbd","level":"error","description":"(1235): Invalid value for element 'options': no_full_log."}
{"timestamp":"2018/03/13 12:13:58","tag":"ossec-dbd","level":"error","description":"(1238): Invalid value for element 'options': no_full_log"}
{"timestamp":"2018/03/13 12:13:58","tag":"ossec-dbd","level":"error","description":"(1220): Error loading the rules: 'ruleset/rules/0520-vulnerability-detector.xml'."}
{"timestamp":"2018/03/13 12:13:58","tag":"ossec-dbd","level":"critical","description":"(1202): Configuration error at '/var/ossec/etc/ossec.conf'."}
{"timestamp":"2018/03/13 12:14:44","tag":"ossec-dbd","level":"error","description":"(1235): Invalid value for element 'options': no_full_log."}
{"timestamp":"2018/03/13 12:14:44","tag":"ossec-dbd","level":"error","description":"(1238): Invalid value for element 'options': no_full_log"}
{"timestamp":"2018/03/13 12:14:44","tag":"ossec-dbd","level":"error","description":"(1220): Error loading the rules: 'ruleset/rules/0520-vulnerability-detector.xml'."}
{"timestamp":"2018/03/13 12:14:44","tag":"ossec-dbd","level":"critical","description":"(1202): Configuration error at '/var/ossec/etc/ossec.conf'."}

Referencing the following file:
/var/ossec/ruleset/rules/0520-vulnerability-detector.xml

Note: Commenting out the 'no_full_log' element allows ossec-dbd to start normally.

<!-- <options>no_full_log</options> -->
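As a quick sanity check, the workaround above can be scripted; this is a minimal sketch using plain string replacement (ruleset files may contain several top-level `<group>` elements, which trips strict XML parsers):

```python
# Minimal sketch of the workaround above: comment out every
# <options>no_full_log</options> element so ossec-dbd can start.
# Plain string replacement is used on purpose, since ruleset files
# are not guaranteed to parse as a single well-formed XML document.
def comment_out_no_full_log(text: str) -> str:
    target = "<options>no_full_log</options>"
    return text.replace(target, f"<!-- {target} -->")

# Example usage against the offending file:
# path = "/var/ossec/ruleset/rules/0520-vulnerability-detector.xml"
# with open(path) as f:
#     patched = comment_out_no_full_log(f.read())
```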

What can OSSEC do when an anomaly (e.g. a virus) is already spreading on the system?

Are there any rules, or an active-response command, that would stop a virus or other anomaly from spreading, infecting, and crashing our systems? Is there any demonstration of that scenario, and where can I find it?

What I have tried so far is blocking an IP after a brute-force attack (FTP, SSH). But what if the attacker could somehow prevent the OSSEC rules from blocking his IP, and then started spreading unwanted files or documents, i.e. a virus? Is there any footage or demonstration of that?

sshd rule: Bad protocol version identification missing srcip

Just noticed the following today on an OpenSSH server running on Debian Stretch:

Wazuh Notification.
2017 Sep 01 17:42:58

Received From: foo->/var/log/auth.log
Rule: 5701 fired (level 8) -> "sshd: Possible attack on the ssh server (or version gathering)."
Portion of the log(s):

Sep  1 17:42:56 foo sshd[32010]: Bad protocol version identification '\003' from 192.168.178.1 port 857



 --END OF NOTIFICATION

It seems this alert is missing the srcip because of the trailing `$` in the decoder:

<decoder name="ssh-scan2">
  <parent>sshd</parent>
  <prematch>^Did not receive identification|^Bad protocol version</prematch>
  <regex offset="after_prematch"> from (\S+)$</regex>
  <order>srcip</order>
</decoder>

It might be that something similar to d6d36e6 needs to be done here.
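The anchoring problem can be reproduced outside OSSEC. This Python sketch uses the `re` module for illustration only; the OSSEC regex dialect is not identical, but the effect of the trailing `$` is the same:

```python
import re

# Why rule 5701 loses srcip: the decoder anchors the capture at end
# of line with '$', but OpenSSH appends " port NNN" after the
# address, so "(\S+)$" can never match the IP.
log = "Bad protocol version identification '\\003' from 192.168.178.1 port 857"

anchored = re.search(r" from (\S+)$", log)    # no match: " port 857" follows the IP
unanchored = re.search(r" from (\S+)", log)   # captures "192.168.178.1"
```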

False negative for Shellshock pattern detection

The following regex is currently used to detect Shellshock attacks:

<regex>"\(\)\s*{\s*:;\s*}\s*;</regex>
https://github.com/wazuh/wazuh-ruleset/blob/2.1/rules/0250-apache_rules.xml#L308
https://github.com/wazuh/wazuh-ruleset/blob/2.1/rules/0245-web_rules.xml#L228

This is too strict and misses Shellshock attacks that do something like:

192.168.2.100 - - [02/Nov/2015:01:35:55 +0100] "GET /cgi-bin/test.sh HTTP/1.1" 404 292 "-" "() { foo:;};/usr/bin/perl ..."

Some example patterns that are not detected by those rules:

() { ignored;};
from https://pastebin.com/166f8Rjx

() { gry;};
from https://github.com/gry/shellshock-scanner/blob/master/shellshock_scanner.py#L31

Similar variants also work, for example:
() { foo:; };

There is also another pattern, for CVE-2014-6278, that is not checked at all:

() { _; } >_[$($())] {
from https://github.com/gry/shellshock-scanner/blob/master/shellshock_scanner.py#L32

with variants like the following, which also works:

() { _; foo; } >_[$($())] {
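The gap can be demonstrated with a quick script. The looser pattern below is a hypothetical sketch for illustration, not the shipped rule, and Python's `re` only approximates the OSSEC dialect:

```python
import re

# The current ruleset pattern (leading quote dropped for clarity)
# only matches the canonical '() { :; };' body:
strict = re.compile(r'\(\)\s*\{\s*:;\s*\}\s*;')

# Hypothetical looser sketch: accept any body before the closing
# brace, which also catches the CVE-2014-6278 form. Not the shipped
# rule; shown for illustration only.
loose = re.compile(r'\(\)\s*\{\s*[^}]*;\s*\}')

samples = [
    "() { :;};",                # canonical form: both match
    "() { ignored;};",          # missed by strict
    "() { gry;};",              # missed by strict
    "() { foo:; };",            # missed by strict
    "() { _; } >_[$($())] {",   # CVE-2014-6278: missed by strict
]

missed = [s for s in samples if loose.search(s) and not strict.search(s)]
```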

Checkpoint decoders not working properly

Checkpoint decoders are not working properly. The decoders included do not match most of the logs found on real systems:

https://groups.google.com/forum/?hl=en#!searchin/ossec-list/checkpoint%7Csort:date/ossec-list/U7_kTZKRDCc/U3DiEXiXCwAJ

https://groups.google.com/forum/#!topic/wazuh/r6p_pl9hgSo

https://ossec-docs.readthedocs.io/en/latest/log_samples/firewalls/checkpoint.html

These decoders need to be more flexible, and it may not always be possible to get the program name pre-decoded.
