domain-protect / domain-protect


OWASP Domain Protect - prevent subdomain takeover

Home Page: https://owasp.org/www-project-domain-protect/

License: Other

Python 70.01% HCL 24.46% Smarty 4.08% Shell 1.01% HTML 0.44%
security security-tools bugbounty dns aws terraform serverless cloudflare owasp

domain-protect's Introduction

OWASP Domain Protect

Release version Python 3.x License OWASP Maturity

Prevent subdomain takeover ...


... with serverless cloud infrastructure


OWASP Global AppSec Dublin - talk and demo

Global AppSec Dublin 2023

Features

  • scan Amazon Route53 across an AWS Organization for domain records vulnerable to takeover
  • scan Cloudflare for vulnerable DNS records
  • take over vulnerable subdomains yourself before attackers and bug bounty researchers
  • automatically create known issues in Bugcrowd or HackerOne
  • vulnerable domains in Google Cloud DNS can be detected by Domain Protect for GCP
  • manual scans of cloud accounts with no installation

Installation

Collaboration

We welcome collaborators! Please see the OWASP Domain Protect website for more details.

Documentation

Manual scans - AWS
Manual scans - Cloudflare
Architecture
Database
Reports
Automated takeover optional feature
Cloudflare optional feature
Bugcrowd optional feature
HackerOne optional feature
Vulnerability types
Vulnerable A records (IP addresses) optional feature
Requirements
Installation
Slack Webhooks
AWS IAM policies
CI/CD
Development
Code Standards
Automated Tests
Manual Tests
Conference Talks and Blog Posts

Limitations

This tool cannot guarantee 100% protection against subdomain takeovers.

domain-protect's People

Contributors

adamwmaj, altho1, christophetd, com6056, dependabot[bot], derrickklisevevo, eramvn, jbond79, jxdv, paulschwarzenberger, ruddles


domain-protect's Issues

Recommendation: Add the ability to diff public IPs against DNS records

  1. For each hosted zone ID, call list_resource_record_sets for A records and store only the public IPs in a list.
  2. Describe EC2 instances, get their public IPs and add them to the list.
  3. Describe Elastic IPs and add them to the list.
  4. Describe ENIs, i.e. ec2.describe_network_interfaces().get("NetworkInterfaces"), get their public IPs and add them to the list.

Finally, diff the two: if an IP exists in Route53 but not in your list of public IPs, you have found a record to remove.
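The steps above could be sketched with boto3 along these lines (a minimal sketch: pagination, region handling and the Route53 collection step are simplified, and the helper names are my own):

```python
def dangling_a_records(route53_ips, account_public_ips):
    # Final diff step: an A-record IP present in Route53 but not owned by
    # the account is a candidate record to remove.
    return sorted(set(route53_ips) - set(account_public_ips))


def collect_account_public_ips(session):
    # Steps 2-4: public IPs from EC2 instances, Elastic IPs and ENIs.
    # `session` is a boto3.Session for the target account.
    ec2 = session.client("ec2")
    ips = set()
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            if instance.get("PublicIpAddress"):
                ips.add(instance["PublicIpAddress"])
    for address in ec2.describe_addresses()["Addresses"]:
        if address.get("PublicIp"):
            ips.add(address["PublicIp"])
    for eni in ec2.describe_network_interfaces()["NetworkInterfaces"]:
        public_ip = eni.get("Association", {}).get("PublicIp")
        if public_ip:
            ips.add(public_ip)
    return ips
```

The diff itself is pure, so it can run once per account or once across the whole organisation after merging each account's IP set.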

Prevent duplicate submissions to BugCrowd

Occasionally we get duplicate Domain Protect issues created in Bugcrowd (we are an organisation using the optional Bugcrowd integration).

Prevent this by using the Bugcrowd API to check whether there is already an unresolved issue for that domain in Bugcrowd before raising a new one.
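The dedupe decision itself might look like the sketch below, applied to submissions already fetched from the Bugcrowd API (the "domain" and "state" field names here are assumptions for illustration, not the real API schema):

```python
def should_raise_issue(domain, existing_submissions):
    # existing_submissions: list of dicts describing submissions already
    # present in Bugcrowd; field names are hypothetical.
    for submission in existing_submissions:
        if submission.get("domain") == domain and submission.get("state") != "resolved":
            return False  # an unresolved issue already exists for this domain
    return True
```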

KeyError: 'ResourceRecords' Issue in manual_scans/aws/aws-cname-s3.py

During the scan, record set r sometimes has the following structure:
{'Name': 'sub.domain.com.', 'Type': 'CNAME', 'AliasTarget': {'HostedZoneId': 'SOMEZONEID', 'DNSName': 'otherasubdomain.domain.com.', 'EvaluateTargetHealth': False}}
which causes a KeyError: 'ResourceRecords' and crashes the script.
A possible solution would be to catch the exception, or to check whether ResourceRecords exists in advance.

The issue occurs on the line
and "amazonaws.com" in r["ResourceRecords"][0]["Value"]

The issue is the same for all the aws-cname*.py scripts.

I added a check on line 51 as a quick fix, but I'm not sure we should ignore Alias records:

if r["Type"] in ["CNAME"] and r.get("ResourceRecords")

False positive in Elastic Beanstalk Alias detection

AWS does not allow specifying domains that start with eba- when creating an Elastic Beanstalk application; it returns the error Value eba-8cahe6rt at 'CNAMEPrefix' failed to satisfy constraint: Member Don't start domain name with 'eba-' (reserved).
Therefore, dangling records for Elastic Beanstalk domains that start with eba- should be filtered out of the results. I suggest marking them as a warning: the DNS record should still be cleaned up, but it has no security implication.
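The filtering might be done as sketched below (record structure follows Route53's list_resource_record_sets response; the "eba- is reserved" rule is taken from the error message above, and the helper name is mine):

```python
def split_eba_warnings(record_sets):
    # A dangling CNAME whose target's first label starts with "eba-"
    # cannot be taken over, because AWS reserves that CNAMEPrefix:
    # downgrade it from a vulnerability to a cleanup warning.
    vulnerabilities, warnings = [], []
    for record in record_sets:
        target = record["ResourceRecords"][0]["Value"]
        if target.split(".")[0].startswith("eba-"):
            warnings.append(record)
        else:
            vulnerabilities.append(record)
    return vulnerabilities, warnings
```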

bug in takeover process for Elastic Beanstalk

There appears to be a bug in the takeover process for Elastic Beanstalk. When creating an Elastic Beanstalk environment, you have the option to either specify your own domain or let Elastic Beanstalk create one for you.

If you choose to let Elastic Beanstalk auto-create the domain for the environment, you will get something like my-eb-app-env.some-random-string.us-east-1.elasticbeanstalk.com.

When Domain Protect's takeover process runs the CloudFormation stack to attempt to take over the orphaned Elastic Beanstalk environment, the following error is generated, because the "DomainName" parameter sent to the CloudFormation stack includes the auto-generated subdomain, which contains a period. I think this can be resolved by stripping off the subdomain.

Resource handler returned message: "Value my-eb-app-env.eba-ic56qmpr at 'CNAMEPrefix' failed to satisfy constraint: Member must contain only letters, digits, and the dash character and may not start or end with a dash (Service: ElasticBeanstalk, Status Code: 400)
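Stripping the subdomain before building the CNAMEPrefix could be as simple as taking the first DNS label (a sketch; whether the first label is always the desired prefix is an assumption):

```python
def cname_prefix(domain_name):
    # "my-eb-app-env.eba-ic56qmpr" -> "my-eb-app-env"
    # CNAMEPrefix may contain only letters, digits and dashes, so
    # everything from the first period onwards must be removed.
    return domain_name.split(".")[0]
```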

Slack App emojis showing on laptop but not mobile

If the Slack App option is selected (rather than a legacy webhook), Slack messages are fine on a laptop.

However on a mobile, emojis in the title of a Slack app message are not displayed and instead we see the text, e.g. :warning: instead of ⚠️

  • mobile (screenshot)

  • laptop (screenshot)

If S3 takeover fails the first time, subsequent attempts for the same S3 bucket always fail

If S3 takeover fails the first time, subsequent attempts for the same S3 bucket always fail.
This is because on the first attempt, a CloudFormation stack is created with a name derived from the bucket and region.

If that fails, any subsequent takeover attempt will also fail, as a CloudFormation stack already exists with the same name.

The proposed solution is to add a short random suffix to the CloudFormation stack name to prevent such conflicts.
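The suffix might be generated as below (a sketch; the exact stack-naming convention used by domain-protect is assumed):

```python
import secrets


def takeover_stack_name(bucket_name, region):
    # Derive the stack name from bucket and region as before, but append
    # a short random hex suffix so a failed attempt never blocks a retry
    # with an "AlreadyExists" error. Periods are invalid in stack names.
    base = f"{bucket_name}-{region}".replace(".", "-")
    return f"{base}-{secrets.token_hex(3)}"
```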

Error handling

The Lambdas seem to indicate that they assumed the role into the org account even when there is an access denied error.

For example, I forgot to add the org account ID to the terraform variables, and the logs showed that the Lambda assumed the role. After updating the file with the ID, my trust policy wasn't set correctly, so the Lambdas failed to assume the role, but the logs still showed that the role was assumed.

[ERROR]	2021-07-08T19:33:01.162Z	aa4701c8-0aa1-4a89-9fa7-feae380c7590	ERROR: Failed to assume domain-protect-audit role in AWS account 828414645366
Traceback (most recent call last):
  File "/var/task/cname-eb.py", line 25, in assume_role
    assumed_role_object = stsclient.assume_role(RoleArn = security_audit_role_arn, RoleSessionName = project)
  File "/var/runtime/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/var/runtime/botocore/client.py", line 676, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts::yyyyyyyyyyyy:assumed-role/domain-protect-lambda-default/domain-protect-cname-eb-default is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxxxxxxxxxxx:role/domain-protect-audit

Test for vulnerable CNAME to Azure Front Door

Azure Front Door is a service which can be taken over if a dangling DNS CNAME entry isn't deleted when the Azure Front Door instance is destroyed. Azure Front Door is configured with e.g. uniquesubdomain.azurefd.net; if that domain doesn't exist, there is a distinctive response:

(screenshot of the distinctive Azure Front Door error response)

Implement test using requests module, this will be similar to the test for a missing S3 bucket, but look for Oops! We weren't able to find your Azure Front Door Service configuration.

Should be implemented as a new function in https://github.com/domain-protect/domain-protect/blob/main/utils/utils_requests.py
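Such a function might look like the sketch below, mirroring the style of the existing missing-S3-bucket check (the detection string is from the issue; the helper names are mine, and the pure check is split out so it can be tested without a network):

```python
AZURE_FRONT_DOOR_MISSING = (
    "Oops! We weren't able to find your Azure Front Door Service configuration."
)


def front_door_config_missing(body):
    # Pure detection step on an HTTP response body.
    return AZURE_FRONT_DOOR_MISSING in body


def vulnerable_azure_front_door(domain_name, timeout=3):
    import requests  # project dependency; imported here to keep the check above standalone

    try:
        response = requests.get(f"https://{domain_name}", timeout=timeout)
    except requests.exceptions.RequestException:
        return False
    return front_door_config_missing(response.text)
```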

Avoid suspended accounts

Suggest adding a few lines of code to list_accounts in utils_aws.py, to avoid failing on SUSPENDED accounts:

    boto3_session = assume_role(org_primary_account)
    client = boto3_session.client(service_name="organizations")

    accounts_list = []

    try:
        paginator_accounts = client.get_paginator("list_accounts")
        pages_accounts = paginator_accounts.paginate()
        for page_accounts in pages_accounts:
            accounts = page_accounts["Accounts"]
            for account in accounts:
                if account["Status"] != "SUSPENDED":
                    accounts_list.append(account)

        return accounts_list

    except Exception:
        logging.exception(
            "ERROR: Unable to list AWS accounts across organization with primary account %a", org_primary_account
        )

    return []

Doesn't support wildcards on CNAME

Route53 supports wildcards for all record types: https://aws.amazon.com/route53/faqs/#Support_for_wildcard_entries.
When you have a wildcard in a CNAME, domain-protect fails with the following error:

[ERROR] InvalidURL: Invalid URL 'https://\052.sudomain.domain.com.': No host supplied
Traceback (most recent call last):  
File "/var/task/scan.py", line 275, in lambda_handler
    cname_cloudfront_s3(account_name, record_sets, account_id)  
File "/var/task/scan.py", line 159, in cname_cloudfront_s3
    result = vulnerable_storage(domain)  
File "/var/task/utils/utils_requests.py", line 10, in vulnerable_storage
    response = requests.get("https://" + domain_name, timeout=https_timeout)  
File "/var/task/requests/api.py", line 75, in get
    return request('get', url, params=params, **kwargs)  
File "/var/task/requests/api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)  
File "/var/task/requests/sessions.py", line 528, in request
    prep = self.prepare_request(req)  
File "/var/task/requests/sessions.py", line 456, in prepare_request
    p.prepare(  
File "/var/task/requests/models.py", line 316, in prepare
    self.prepare_url(url, params)  
File "/var/task/requests/models.py", line 393, in prepare_url
    raise InvalidURL("Invalid URL %r: No host supplied" % url)
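A possible guard is sketched below (Route53 returns the wildcard label in octal escape form, \052, which is what trips up requests; the helper names are mine):

```python
def normalise_record_name(record_name):
    # Route53 escapes "*" as the octal sequence \052 in record names;
    # convert it back and drop the trailing dot.
    return record_name.replace("\\052", "*").rstrip(".")


def is_scannable(record_name):
    # Wildcard names cannot be fetched over HTTP directly, so skip them
    # before building a URL for requests.get().
    return not normalise_record_name(record_name).startswith("*")
```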

Getting error AttributeError: module 'enum' has no attribute 'IntFlag'

When I run

  • aws-cname-cloudfront-s3.py
  • aws-cname-s3.py
  • aws-ns-domain.py
  • aws-ns-subdomain.py

I get the error below:

 File "/home/homefolder/.local/lib/python3.5/site-packages/dns/resolver.py", line 32, in <module>
    import dns.flags
  File "/home/homefolder/.local/lib/python3.5/site-packages/dns/flags.py", line 24, in <module>
    class Flag(enum.IntFlag):
AttributeError: module 'enum' has no attribute 'IntFlag'

Update Lambda doesn't check for fixed registered domain vulnerability

The update Lambda doesn't check for a fixed registered domain vulnerability.

If there's a vulnerable registered domain which is then fixed by deleting or correctly configuring name servers, there's currently no update of the database to show the domain as fixed, and no notification.

Slack webhook token is a secret

Hi,

I was looking at the code, and I saw the Slack webhook URL is not being stored as a secret here and here. It is stored as an environment variable in the Lambda function, and for companies that use Atlantis to deploy infrastructure, terraform.tfvars is also committed. Gitleaks detects Slack webhook URLs as secrets, and Slack says the same here: https://api.slack.com/messaging/webhooks#:~:text=Your%20webhook%20URL%20contains%20a,out%20and%20revokes%20leaked%20secrets.

I would say we should use something like the function below to retrieve the secret from SSM Parameter Store or Secrets Manager:

import logging

import boto3

lambda_logger = logging.getLogger(__name__)


def get_secret(secret_name):
    ssm_client = boto3.client("ssm")

    try:
        response = ssm_client.get_parameter(
            Name=secret_name,
            WithDecryption=True,
        )
        return response["Parameter"]["Value"]
    except Exception as e:
        lambda_logger.error(f"Failed to retrieve secret {secret_name} because {e}")
    return None

isException global var use

The code below is copied from the manual scan folder; it uses the isException variable to check whether an exception was thrown when the DNS lookup was performed. But all the scripts declaring the field (e.g. aws-cname-eb.py) never set it to True when an exception is thrown. Am I missing something, or is this a bug?

 elif (result == False) and (isException == True):
     suspectedDomains.append(cname_record)
     my_print(str(i) + ". " + cname_record, "INFOB")
     my_print(exception_message, "INFO")

Also, in aws-cname-eb.py, shouldn't line 86 return False, since the domain has been successfully resolved?

"No module named 'utils'" when running AWS manual scans

Followed install instructions, tried on Python 3.11 and Python 3.9. I get this error:

$ python3 -m venv venv && source venv/bin/activate
$ pip install -r manual_scans/aws/requirements.txt
$ python manual_scans/aws/aws-alias-cloudfront-s3.py
Traceback (most recent call last):
  File "/private/tmp/domain-protect/manual_scans/aws/aws-alias-cloudfront-s3.py", line 7, in <module>
    from utils.utils_aws_manual import list_hosted_zones_manual_scan
ModuleNotFoundError: No module named 'utils'

[Feature] Support for event driven scanning for Route53 based domains

I have a use case where the Route53 service is used by multiple consumers competing against each other for the maximum API calls per second. This leads to multiple throttling scenarios for each consumer.

My proposal is to provide an event-driven scanning feature for domain-protect, where the Lambda is triggered by a CloudWatch event rule such as the one below:

{
  "source": ["aws.route53"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["route53.amazonaws.com"],
    "eventName": ["ChangeResourceRecordSets", "CreateHostedZone","DeleteHostedZone"]
  }
}

The events above are needed for the following reasons:

  1. ChangeResourceRecordSets - to monitor modifications to zone record sets, such as UPSERT, DELETE and CREATE.
  2. CreateHostedZone - to monitor new hosted zones. May not be significant from a scanning standpoint, but it can be recorded in DynamoDB for the record.
  3. DeleteHostedZone - to monitor deletion of a hosted zone.

This would be much closer to real time, as it is triggered on every change in the environment. The event JSON also contains almost all the details needed, which in turn avoids scanning all AWS accounts every time.

This feature would be really helpful, at least for me, as it would avoid querying the Route53 API directly and throttling the service for all other consumers, including domain-protect.

Note:-
The setup of event forwarding from external accounts to domain protect hosted account could be outside domain-protect deploy's responsibility.

Sample Diagram :-

proposal-domainprotect-sqs

[EDIT]

I realised very late that an event-driven setup may not really be suitable here, as the state of the domain also depends on external factors, so it is more realistic to monitor only on a scheduled basis. Hence closing this ticket.

Deployment fails on seemingly hard-coded eu-west-1 call for DynamoDB setup

Greetings, and thank you for this project. I've instantiated a manual deploy multiple times, and cannot determine how to resolve this error:

│ Error: creating Amazon DynamoDB Table (DomainProtectIPsDev): AccessDeniedException: User: arn:aws:sts:::assumed-role/-oidc-github/GitHubActions is not authorized to perform: dynamodb:CreateTable on resource: arn:aws:dynamodb:eu-west-1:***:table/DomainProtectIPsDev because no identity-based policy allows the dynamodb:CreateTable action
│ status code: 400, request id: KMLJVOFO3A6GMK8I9OJO1QL8HFVV4KQNSO5AEMVJF66Q9ASUAAJG

My secrets are set; region, aws_region, allowed_regions, and tf_state_region are all set to us-east-1. Repository secrets: ALLOWED_REGIONS, AWS_DEPLOY_ROLE_ARN, AWS_REGION, ORG_PRIMARY_ACCOUNT, REGION, SECURITY_AUDIT_ROLE_NAME, SLACK_CHANNELS, SLACK_CHANNELS_DEV, SLACK_WEBHOOK_URLS, SLACK_WEBHOOK_URLS_DEV, TERRAFORM_STATE_BUCKET, TERRAFORM_STATE_KEY, TERRAFORM_STATE_REGION.

tfvars updated as prescribed:

env:
  TF_VAR_org_primary_account: ${{ secrets.ORG_PRIMARY_ACCOUNT }}
  TF_VAR_slack_channels: ${{ secrets.SLACK_CHANNELS }}
  TF_VAR_slack_channels_dev: ${{ secrets.SLACK_CHANNELS_DEV }}
  TF_VAR_slack_webhook_urls: ${{ secrets.SLACK_WEBHOOK_URLS }}
  TF_VAR_slack_webhook_urls_dev: ${{ secrets.SLACK_WEBHOOK_URLS_DEV }}
  TF_VAR_slack_webhook_type: "app"
  TF_VAR_external_id: ${{ secrets.EXTERNAL_ID }}

  TF_VAR_cloudflare: true
  TF_VAR_cf_api_key: ${{ secrets.CF_API_KEY }}

  TF_VAR_ip_address: true
  TF_VAR_ip_time_limit: "0.1"
  TF_VAR_allowed_regions: "['us-east-1', 'us-east-2']"
  TF_VAR_scan_schedule: "10 minutes"
  TF_VAR_update_schedule: "10 minutes"
  TF_VAR_ip_scan_schedule: "10 minutes"

  TF_VAR_hackerone: "enabled"
  TF_VAR_hackerone_api_token: ${{ secrets.HACKERONE_API_TOKEN }}

Searching the repo, I only found one instance of 'eu-west-1' and it is the "default" region for Lambda creation; no mention of DynamoDB. I've failed to get this deployed over several days — can you please advise on what I'm missing?

Check for ResourceRecords in DNS record sets

In the scan Lambda (scan.py) there are several checks on ["ResourceRecords"][0]["Value"] which fail for odd DNS entries. You will need to add a 'ResourceRecords' in r condition to the relevant record_sets_filtered comprehensions (or filter such records in list_resource_record_sets), e.g.

    record_sets_filtered = [
        r
        for r in record_sets
        if r["Type"] in ["CNAME"]
        and 'ResourceRecords' in r
        and any(vulnerability in r["ResourceRecords"][0]["Value"] for vulnerability in vulnerability_list)
    ]

ERROR: Failed to assume domain-protect-audit role in AWS account

For some reason, none of the Lambdas can assume the role. I did create this role in all accounts, and I am assuming the Lambdas construct the role ARN dynamically after assuming the role into the org account and getting all the account IDs. Does this solution assume that we deploy it in the org account or in a security account?

[ERROR]	2021-07-08T18:28:07.025Z	ce12dc73-4302-453b-adc3-b3fbc9ab3eeb	ERROR: Failed to assume domain-protect-audit role in AWS account 
Traceback (most recent call last):
  File "/var/task/alias-eb.py", line 25, in assume_role
    assumed_role_object = stsclient.assume_role(RoleArn = security_audit_role_arn, RoleSessionName = project)
  File "/var/runtime/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/var/runtime/botocore/client.py", line 676, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts::12345678910:assumed-role/domain-protect-lambda-default/domain-protect-alias-eb-default is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam:::role/domain-protect-audit

Update scan sometimes incorrectly reports NS records as fixed

The update scan sometimes incorrectly reports NS records as fixed.
Following recent improvements to logging, we can see this is due to the DNS lookup itself in the vulnerable_ns function, not the dns_deleted function.

A possible fix may be to get all the nameservers of the delegated subdomain and query each one in turn.

AWS CloudFront with deleted S3 bucket no longer detected

Previously, AWS CloudFront distributions with a missing S3 bucket were detected by looking for NoSuchBucket in the response, which also included the name of the missing S3 bucket.

However, exactly the same AWS CloudFront configuration now results in the following response:

(screenshot of the new response)

I'm guessing that this is most likely a security improvement implemented recently by AWS.

Many thanks to @christophetd for highlighting this issue!

resource_type uninitialised for some resources

In lambda-slack's notify.py, resources_message assumes all keys are present; unfortunately, that is not the case.
I suggest you initialise all variables, to avoid failing if any are missing.
Adding this line at the beginning of the function solves the problem:

resource_name = resource_type = takeover_account = vulnerable_account = vulnerable_domain = ""

Support using system configured dns resolvers

Considering that security best practice is to block unknown DNS resolvers within corporate networks, as well as to disallow bypassing corporate DNS name servers, the hardcoding of the Google DNS resolvers in #155 prevents using this tool within a lot of corporate environments.

Perhaps supporting an argument to specify DNS nameservers would be a nice feature to have?

Domain Protect doesn't detect A record fixed when standard A record is changed to an Alias A record.

Domain Protect doesn't detect that an A record is fixed when that fix is implemented by editing a standard A record to change it to an Alias A record.

It would be challenging to develop this feature, and a significant processing overhead, because it would require a complete scan of all Route53 records to see if there's a matching record, and then to analyse whether it's an alias record.

It's such an edge case that I don't believe it's worth doing this, the workaround is to edit the DynamoDB database to say that this vulnerable domain is now fixed.

Update scan doesn't detect A records fixed by deleting record and star record existing

It looks like the only way domain-protect detects whether an A record was deleted is by trying to actually resolve it:

def dns_deleted(domain_name, record_type="A"):
    # DNS record type examples: A, CNAME, MX, NS
    try:
        myresolver.resolve(domain_name, record_type)
    except (resolver.NoAnswer, resolver.NXDOMAIN):
        print(f"DNS {record_type} record for {domain_name} no longer found")
        return True
    except (resolver.NoNameservers, resolver.NoResolverConfiguration, resolver.Timeout):
        return False
    return False

In our case, we have a star record in place that means that if we delete a vulnerable record, it will still resolve due to the star record.

For example:

  1. We have a star record in place, such as *.example.com, pointing to a non-vulnerable IP.
  2. domain-protect scans vulnerable.example.com and marks it as vulnerable.
  3. We delete the vulnerable vulnerable.example.com A record.
  4. domain-protect doesn't identify the record as fixed: it still resolves via the star record, so it doesn't look deleted.
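One way dns_deleted could account for this is sketched below (comparing the specific record's answer with what the star record returns is a suggested heuristic, not existing domain-protect behaviour, and the helper name is mine):

```python
def masked_by_star_record(record_ips, star_record_ips):
    # If vulnerable.example.com now resolves to exactly the same IPs as
    # *.example.com, the specific A record has most likely been deleted
    # and the answer is coming from the star record.
    return bool(record_ips) and set(record_ips) == set(star_record_ips)
```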

Output no domain found in this account error

The domain-protect-ns-domain-default Lambda should print a message letting the admin know that a given account doesn't have any registered domains. Not a dealbreaker, but good to have and easy to do, as the script has all the information needed.

Update scan doesn't detect A records fixed by changing to new IP address

The update scan checks whether domains are still vulnerable.
In the case of A records, it successfully detects fixed domains when:

  • the domain is deleted
  • the domain is no longer vulnerable because the IP address is within our AWS org

In that case, the record in the database is marked as fixed.

However, in the case where the DNS record is changed to point to another IP address that is also not within our AWS org, the database record isn't updated with the new IP address, so the daily report of vulnerable domains is misleading.

This behaviour should be improved by either:

  • updating the resource record in the database to the new IP address, or
  • marking as fixed and allowing the next regular scan to determine whether the new IP address is in our org or not
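The decision logic for the second option might be sketched as follows (the helper name and return values are illustrative, not domain-protect's actual code):

```python
def a_record_update_status(stored_ip, current_ip, org_ips):
    # current_ip is None when the record has been deleted.
    if current_ip is None or current_ip in org_ips:
        return "fixed"
    if current_ip != stored_ip:
        # Changed to a new external IP: mark fixed and let the next
        # regular scan decide whether the new IP is vulnerable.
        return "fixed"
    return "vulnerable"
```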
