
disney / terraform-aws-kinesis-firehose-splunk


This code creates/configures a Kinesis Firehose in AWS to send CloudWatch log data to Splunk.

Home Page: http://disney.github.io/

License: Other

JavaScript 23.95% HCL 76.05%

terraform-aws-kinesis-firehose-splunk's People

Contributors

bogdannazarenko, chrisputonen, kevinkuszyk, mlcooper, phundisk, shawnucd, spr-mweber3, tlopo, yzargari


terraform-aws-kinesis-firehose-splunk's Issues

Getting a "promise is not a function" error in Lambda execution logs

Hi there,

I was getting the error `promise is not a function` from https://github.com/disney/terraform-aws-kinesis-firehose-splunk/blob/master/files/kinesis-firehose-cloudwatch-logs-processor.js#L131:

const response = await client[methodName](args).promise()

While debugging, I printed the return value of client[methodName](args) and saw it was already a Promise (so it has no .promise() method to call), and I changed the line to:

const response = await client[methodName](args)

It solved my problem, so I thought I would let you guys know.

The argument "region" is required, but was not set.

Hello,

I'm receiving an error ("The argument "region" is required, but was not set.") when invoking terraform refresh against a .tf with the following:

module "kinesis_firehose" {
  source = "disney/kinesis-firehose-splunk/aws"
  region = "us-east-1"
  arn_cloudwatch_logs_to_ship = "arn:aws:logs:us-east-1:<client>:log-group:/var/log/messages:*"  
  name_cloudwatch_logs_to_ship = "/var/log/messages"
  hec_token = "<encrypted HEC token>"
  kms_key_arn = "arn:aws:kms:us-east-1:<client>:key/<key>"
  hec_url = "<Splunk_Kinesis_ingest_URL>"
  s3_bucket_name = "var_log_messages"
}

This error persists even when I set the environment variable TF_VAR_region="us-east-1".
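One thing worth noting: TF_VAR_region only populates a variable declared in the root module; Terraform never passes it through to a child module automatically. A minimal sketch of wiring it through explicitly (the root-module variable here is illustrative):

variable "region" {
  type = string
}

module "kinesis_firehose" {
  source = "disney/kinesis-firehose-splunk/aws"
  region = var.region # populated from TF_VAR_region
  # ... remaining arguments as in the block above
}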

Here is the TRACE output, grepped for "region":

  module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
  module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
  module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
  module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
2021-11-14T18:35:10.798-0500 [TRACE] ModuleExpansionTransformer: module.kinesis_firehose.var.region (expand) must wait for expansion of module.kinesis_firehose
    module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
  module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
2021-11-14T18:35:10.836-0500 [DEBUG] ReferenceTransformer: "module.kinesis_firehose.var.region (expand)" references: []
2021-11-14T18:35:10.836-0500 [DEBUG] ReferenceTransformer: "module.kinesis_firehose.aws_iam_role.cloudwatch_to_firehose_trust (expand)" references: [module.kinesis_firehose.var.cloudwatch_to_firehose_trust_iam_role_name (expand) module.kinesis_firehose.var.region (expand)]
    module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
    module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
  module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
    module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
    module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
    module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
  module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
    module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
    module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
    module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
  module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
    module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
    module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
    module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
  module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
    module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
  module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
2021-11-14T18:35:10.907-0500 [TRACE] vertex "module.kinesis_firehose.var.region (expand)": starting visit (*terraform.nodeExpandModuleVariable)
2021-11-14T18:35:10.930-0500 [TRACE] vertex "module.kinesis_firehose.var.region (expand)": expanding dynamic subgraph
2021-11-14T18:35:10.963-0500 [TRACE] vertex "module.kinesis_firehose.var.region (expand)": entering dynamic subgraph
2021-11-14T18:35:10.965-0500 [TRACE] vertex "module.kinesis_firehose.var.region": starting visit (*terraform.nodeModuleVariable)
2021-11-14T18:35:10.966-0500 [TRACE] evalVariableValidations: not active for module.kinesis_firehose.var.region, so skipping
2021-11-14T18:35:10.969-0500 [TRACE] vertex "module.kinesis_firehose.var.region": visit complete
2021-11-14T18:35:10.970-0500 [TRACE] vertex "module.kinesis_firehose.var.region (expand)": dynamic subgraph completed successfully
2021-11-14T18:35:10.972-0500 [TRACE] vertex "module.kinesis_firehose.var.region (expand)": visit complete
? The argument "region" is required, but was not set.

Note that I'm using Windows and have changed the EOL char to \n.

Any ideas?

Thanks,

Matt

Better support for multiple CloudWatch Log Groups

Hi,
I've tried using the module in the following way:

module "kinesis_firehose" {
  for_each                     = aws_cloudwatch_log_group.cloudwatch_log_group
  source                       = "disney/kinesis-firehose-splunk/aws"
  version                      = "8.0.0"
  region                       = local.region  
  arn_cloudwatch_logs_to_ship  = "arn:aws:logs:${local.region}:${local.account_id}:log-group:${each.value.name}:*"
  name_cloudwatch_logs_to_ship = each.value.name
  hec_url                      = "https://......"
  s3_bucket_name               = "${each.value.name}-bucket"
  hec_token                    = var.splunk_hec_token
}

It works great for a single log group. But when I use the module with multiple log groups, I have to add the following overrides so the resource names are unique and terraform apply does not fail:

  firehose_name                = "kinesis-firehose-to-splunk-${each.key}"
  kinesis_firehose_lambda_role_name = "KinesisFirehoseToLambaRole-${each.key}"
  kinesis_firehose_iam_policy_name = "KinesisFirehose-Policy-${each.key}"
  cloudwatch_to_firehose_trust_iam_role_name = "CloudWatchToSplunkFirehoseTrust-${each.key}"
  lambda_function_name = "kinesis-firehose-transform-${each.key}"
  kinesis_firehose_role_name = "KinesisFirehoseRole-${each.key}"
  lambda_iam_policy_name = "Kinesis-Firehose-to-Splunk-Policy-${each.key}"
  cloudwatch_to_fh_access_policy_name = "KinesisCloudWatchToFirehosePolicy-${each.key}"
  cloudwatch_log_filter_name = "KinesisSubscriptionFilter-${each.key}"
  log_stream_name = "SplunkDelivery-${each.key}"

While overriding firehose_name makes sense (a separate Firehose delivery stream is created for each log group), some of the others do not: the roles, policies, and Lambda function could be reused instead of creating multiple instances of them (see the sketch below).
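One possible direction, as a sketch only (the lambda_role_arn input shown here is hypothetical and does not exist in the module today): create the shared pieces once in the root module and pass them in.

data "aws_iam_policy_document" "lambda_trust" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "shared_lambda_role" {
  name               = "KinesisFirehoseToLambdaRole"
  assume_role_policy = data.aws_iam_policy_document.lambda_trust.json
}

module "kinesis_firehose" {
  for_each = aws_cloudwatch_log_group.cloudwatch_log_group
  source   = "disney/kinesis-firehose-splunk/aws"
  # ...
  # hypothetical input for a shared, externally managed role:
  # lambda_role_arn = aws_iam_role.shared_lambda_role.arn
}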

@mlcooper Any idea how to achieve that?

Adding support for custom lambda scripts

I'm working on some AWS Lambdas whose logs we want to forward to Splunk. The only thing is that we need to customize the transform Lambda function script somewhat. I've cloned the repo and made changes that should allow this, but I don't seem to be able to push them. Would you be able to give me permission to create a branch and push?

Warning on aws_s3_bucket.kinesis_firehose_s3_bucket

Was able to get the KMS Key created and HEC token encrypted. Running "terraform plan" succeeds, but I get this warning:

│ Warning: Argument is deprecated

│ with module.kinesis_firehose.aws_s3_bucket.kinesis_firehose_s3_bucket,
│ on .terraform/modules/kinesis_firehose/main.tf line 56, in resource "aws_s3_bucket" "kinesis_firehose_s3_bucket":
│ 56: resource "aws_s3_bucket" "kinesis_firehose_s3_bucket" {

│ Use the aws_s3_bucket_server_side_encryption_configuration resource instead
│ (and 3 more similar warnings elsewhere)

Have cloned the repo, and will try to update to the new resource, but just wanted to pass this along.
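For reference, the standalone resource the deprecation warning points at looks roughly like this (the bucket reference is taken from the warning output; the sse_algorithm is an assumption):

resource "aws_s3_bucket_server_side_encryption_configuration" "kinesis_firehose_s3_bucket" {
  bucket = aws_s3_bucket.kinesis_firehose_s3_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}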

Error creating the S3 bucket

Error: error creating S3 bucket ACL for foobar-bucket: AccessControlListNotSupported: 
The bucket does not allow ACLs
│   with module.kinesis_firehose.aws_s3_bucket_acl.kinesis_firehose_s3_bucket,
│   on .terraform/modules/kinesis_firehose/main.tf line 95, in resource "aws_s3_bucket_acl" "kinesis_firehose_s3_bucket":
│   95: resource "aws_s3_bucket_acl" "kinesis_firehose_s3_bucket" {
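This is most likely S3's newer default Object Ownership setting (BucketOwnerEnforced), which disables ACLs on newly created buckets. A sketch of the ownership-controls resource that would have to exist, and be depended on, before the ACL can be applied (resource names mirror the module's; the acl value is an assumption):

resource "aws_s3_bucket_ownership_controls" "kinesis_firehose_s3_bucket" {
  bucket = aws_s3_bucket.kinesis_firehose_s3_bucket.id

  rule {
    object_ownership = "BucketOwnerPreferred" # re-enables ACLs on the bucket
  }
}

resource "aws_s3_bucket_acl" "kinesis_firehose_s3_bucket" {
  depends_on = [aws_s3_bucket_ownership_controls.kinesis_firehose_s3_bucket]
  bucket     = aws_s3_bucket.kinesis_firehose_s3_bucket.id
  acl        = "private"
}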

HEC Token Encryption Method Help

Looking to use this module, but the documentation link for encrypting the HEC token with the KMS key no longer exists. Can you please provide details, or a link to documentation, on how to properly encrypt the HEC token for use with this module? I've been looking but haven't found a good way to do it yet.
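In case it helps anyone else: one way to produce the ciphertext entirely in Terraform is the aws_kms_ciphertext data source. A sketch, assuming the module expects the base64-encoded ciphertext blob (variable names are illustrative):

variable "kms_key_arn" {
  type = string
}

variable "hec_token_plaintext" {
  type      = string
  sensitive = true
}

data "aws_kms_ciphertext" "hec_token" {
  key_id    = var.kms_key_arn # the same key passed to the module
  plaintext = var.hec_token_plaintext
}

# then: hec_token = data.aws_kms_ciphertext.hec_token.ciphertext_blob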

Upgrade s3

For the S3 bucket, it would be good to add "Bucket Key" and "Object Lock" parameters, as well as the ability to automatically delete objects through lifecycle rules (see the sketch below).
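For the lifecycle piece, a sketch of the standalone resource (rule id and expiration window are illustrative):

resource "aws_s3_bucket_lifecycle_configuration" "kinesis_firehose_s3_bucket" {
  bucket = aws_s3_bucket.kinesis_firehose_s3_bucket.id

  rule {
    id     = "expire-old-objects"
    status = "Enabled"

    filter {} # apply to the whole bucket

    expiration {
      days = 30 # delete objects automatically after 30 days
    }
  }
}

For "Bucket Key", the bucket_key_enabled flag goes on the rule of the server-side-encryption configuration resource rather than here.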

The default Node.js runtime for the Lambda function, nodejs12.x, is no longer supported

Error: creating Lambda Function (kinesis-firehose-transform): operation error Lambda: CreateFunction, 
https response error StatusCode: 400, InvalidParameterValueException: The runtime parameter 
of nodejs12.x is no longer supported for creating or updating AWS Lambda functions. 
We recommend you use the new runtime (nodejs18.x) while creating or updating functions.

The configuration should use nodejs18.x; I managed to get it working only with nodejs14.x.
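For anyone patching locally, the change lands on the runtime argument of the transform function in the module source (resource name as it appears in main.tf; this fragment is a sketch, not a released fix):

resource "aws_lambda_function" "firehose_lambda_transform" {
  # ... existing arguments unchanged ...
  runtime = "nodejs18.x" # was nodejs12.x
}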

Must `arn_cloudwatch_logs_to_ship` be required?

Hi,

At the moment, both arn_cloudwatch_logs_to_ship and name_cloudwatch_logs_to_ship are required.

Can arn_cloudwatch_logs_to_ship be made optional when name_cloudwatch_logs_to_ship is provided?

The region can be picked up by the module from the provider configuration. The account ID can be fetched by the module this way:

data "aws_caller_identity" "current" {}

@mlcooper WDYT?

Different formatting when hec_endpoint_type = `Event`?

Hi,
Shouldn't the logs be formatted differently when hec_endpoint_type is set to Event instead of the default Raw?

I got the following error when I set my configuration to Event:

The data is not formatted correctly. To see how to properly format data for Raw or Event HEC endpoints, see Splunk Event Data (http://dev.splunk.com/view/event-collector/SP-CAAAE6P#data). HecServerErrorResponseException{serverRespObject=HecErrorResponseValueObject{text=Invalid data format, code=6, invalidEventNumber=0}, httpBodyAndStatus=HttpBodyAndStatus{statusCode=400, body={"text":"Invalid data format","code":6,"invalid-event-number":0}}, lifecycleType=EVENT_POST_NOT_OK, url=https://44.194.107.82:443, errorType=RECOVERABLE_DATA_ERROR, context=event_post}

Can anyone confirm it works with hec_endpoint_type set to Event?

This might help the documentation for Splunk Cloud customers; it took me a minute to figure out.

If you're a Splunk Cloud customer, once you've successfully deployed all the resources, you'll need to ensure that your Splunk Cloud instance has the Kinesis Data Firehose egress CIDRs allowlisted under "Server Settings" > "IP Allow List Management" > "HEC access for ingestion".

For more details on the relevant CIDRs: https://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-splunk-vpc

Enable CloudWatch Logs Access From Multiple Regions

Currently, the var.region variable is only used in the Kinesis Firehose IAM role trust policy, to allow logs from that particular region. We would like CloudWatch subscription filters in multiple regions to be able to push to a Kinesis Firehose in one particular region (see the sketch below).
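For illustration, the trust policy could accept a list of regions instead of a single one (var.regions is hypothetical; logs.<region>.amazonaws.com is the documented service principal for CloudWatch Logs):

variable "regions" {
  type    = list(string)
  default = ["us-east-1", "us-west-2"]
}

data "aws_iam_policy_document" "cloudwatch_to_firehose_trust" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = [for r in var.regions : "logs.${r}.amazonaws.com"]
    }
  }
}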

ACL Error on Terraform

Hi, I am trying to use this module to send CloudWatch logs to Splunk, and I'm facing these errors:
│ Error: error creating S3 bucket ACL for doe-cdi-tf-splunk-bluebird: AccessControlListNotSupported: The bucket does not allow ACLs
│ status code: 400, request id: PQWWZAJS8Z77QR4N, host id: BkjYWmWoI1Aucg76Az0+GCVGRoAWb15VNSr1NSxL8z/QAbqpNt6IMjLQpZLfuBCr4FJ7Rf5EEg4=

│ with module.kinesis_firehose.aws_s3_bucket_acl.kinesis_firehose_s3_bucket,
│ on .terraform/modules/kinesis_firehose/main.tf line 95, in resource "aws_s3_bucket_acl" "kinesis_firehose_s3_bucket":
│ 95: resource "aws_s3_bucket_acl" "kinesis_firehose_s3_bucket" {



│ Error: error creating Lambda Function (1): AccessDeniedException:
│ status code: 403, request id: e4847f35-2362-4f23-bc4f-881b4fa684b5

│ with module.kinesis_firehose.aws_lambda_function.firehose_lambda_transform,
│ on .terraform/modules/kinesis_firehose/main.tf line 249, in resource "aws_lambda_function" "firehose_lambda_transform":
│ 249: resource "aws_lambda_function" "firehose_lambda_transform" {

Cannot use count, for_each, or depends_on

I am working within a module that requires me to conditionally send logs to Splunk depending on the environment. When attempting to use the count meta-argument, I get the following:

╷
│ Error: Module module.kinesis_firehose_splunk contains provider configuration
│ 
│ Providers cannot be configured within modules using count, for_each or
│ depends_on.
╵

Related module declaration:

module "kinesis_firehose" {
  source  = "disney/kinesis-firehose-splunk/aws"
  version = "4.0.0"
  count   = var.splunk_enabled ? 1 : 0
  ...
}

Would it be possible to remove the provider configuration from this module?
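For reference, the module-side change would be to drop the embedded provider "aws" block and declare only the requirement, so that the provider configuration is inherited from the caller (a sketch of the usual pattern):

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

# No provider "aws" { ... } block inside the module; with the configuration
# inherited from the caller, count/for_each on the module call becomes legal.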

Nodejs 20 runtime introduces new behavior that causes lambda to fail

Just ran into this today using an unpinned version of the module: when I attempted to run the Lambda, I would get an error message like "SyntaxError: Cannot use import statement outside a module".

I noted that the Node.js runtime was 18.x on my working Lambda, while it was set to 20.x on the new Lambda that got set up.

I googled a bit and found the solution was to add a package.json to the lambda with the following contents:

{
    "type": "module"
}

That seemed to get things working. I'll look at the module source code and see if I can come up with a patch.

Allow the HEC token to be passed as a plaintext or as a Parameter Store.

AWS displays the HEC token in plain text in the Kinesis Firehose settings anyway, so it would be reasonable to drop the requirement to create a KMS key, encrypt the token with it, and only then pass the ciphertext as an input parameter (see the sketch below).
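For the Parameter Store option, a sketch of how the token could be read (the parameter name is hypothetical):

data "aws_ssm_parameter" "hec_token" {
  name            = "/splunk/hec_token" # hypothetical SecureString parameter
  with_decryption = true
}

# then: hec_token = data.aws_ssm_parameter.hec_token.value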
