disney / terraform-aws-kinesis-firehose-splunk
This code creates/configures a Kinesis Firehose in AWS to send CloudWatch log data to Splunk.
Home Page: http://disney.github.io/
License: Other
Hi there,
I was getting the error "promise is not a function"
from https://github.com/disney/terraform-aws-kinesis-firehose-splunk/blob/master/files/kinesis-firehose-cloudwatch-logs-processor.js#L131:
const response = await client[methodName](args).promise()
While debugging, I printed the return value of client[methodName](args) and saw that it was already a Promise, so I changed it to:
const response = await client[methodName](args)
That solved my problem, so I thought I would let you know.
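For what it's worth, whether `.promise()` exists depends on which AWS SDK the Lambda runtime provides: v2 client methods return a Request object with a `.promise()` method, while v3 client methods return a Promise directly. A version-tolerant sketch (the `callAws` helper name is hypothetical, not part of the module):

```javascript
// Sketch of a call helper that works against either AWS SDK generation.
// "callAws" is a hypothetical name, not the module's actual code.
async function callAws(client, methodName, args) {
    const result = client[methodName](args);
    // SDK v2: the result exposes .promise(); SDK v3: it is already a Promise.
    return typeof result.promise === "function"
        ? await result.promise()
        : await result;
}
```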
Hello,
I'm receiving an error ("The argument "region" is required, but was not set.") when invoking terraform refresh
against a .tf file containing the following:
module "kinesis_firehose" {
source = "disney/kinesis-firehose-splunk/aws"
region = "us-east-1"
arn_cloudwatch_logs_to_ship = "arn:aws:logs:us-east-1:<client>:log-group:/var/log/messages:*"
name_cloudwatch_logs_to_ship = "/var/log/messages"
hec_token = "<encrypted HEC token>"
kms_key_arn = "arn:aws:kms:us-east-1:<client>:key/<key>"
hec_url = "<Splunk_Kinesis_ingest_URL>"
s3_bucket_name = "var_log_messages"
}
This error persists even when I set the environment variable TF_VAR_region="us-east-1".
Here is the TRACE output, grepped for "region":
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
2021-11-14T18:35:10.798-0500 [TRACE] ModuleExpansionTransformer: module.kinesis_firehose.var.region (expand) must wait for expansion of module.kinesis_firehose
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
2021-11-14T18:35:10.836-0500 [DEBUG] ReferenceTransformer: "module.kinesis_firehose.var.region (expand)" references: []
2021-11-14T18:35:10.836-0500 [DEBUG] ReferenceTransformer: "module.kinesis_firehose.aws_iam_role.cloudwatch_to_firehose_trust (expand)" references: [module.kinesis_firehose.var.cloudwatch_to_firehose_trust_iam_role_name (expand) module.kinesis_firehose.var.region (expand)]
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
module.kinesis_firehose.var.region (expand) - *terraform.nodeExpandModuleVariable
2021-11-14T18:35:10.907-0500 [TRACE] vertex "module.kinesis_firehose.var.region (expand)": starting visit (*terraform.nodeExpandModuleVariable)
2021-11-14T18:35:10.930-0500 [TRACE] vertex "module.kinesis_firehose.var.region (expand)": expanding dynamic subgraph
2021-11-14T18:35:10.963-0500 [TRACE] vertex "module.kinesis_firehose.var.region (expand)": entering dynamic subgraph
2021-11-14T18:35:10.965-0500 [TRACE] vertex "module.kinesis_firehose.var.region": starting visit (*terraform.nodeModuleVariable)
2021-11-14T18:35:10.966-0500 [TRACE] evalVariableValidations: not active for module.kinesis_firehose.var.region, so skipping
2021-11-14T18:35:10.969-0500 [TRACE] vertex "module.kinesis_firehose.var.region": visit complete
2021-11-14T18:35:10.970-0500 [TRACE] vertex "module.kinesis_firehose.var.region (expand)": dynamic subgraph completed successfully
2021-11-14T18:35:10.972-0500 [TRACE] vertex "module.kinesis_firehose.var.region (expand)": visit complete
The argument "region" is required, but was not set.
Note that I'm using Windows and have changed the EOL character to \n.
Any ideas?
Thanks,
Matt
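One thing worth checking (a guess based on the message, not a confirmed fix): "The argument "region" is required" is usually raised by the AWS provider itself rather than by the module's variable, so TF_VAR_region would not satisfy it. Declaring the region on a root-level provider block, for example:

```hcl
# Root-level AWS provider configuration; the region value is an example.
provider "aws" {
  region = "us-east-1"
}
```

or exporting AWS_REGION in the environment, may resolve it.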
Hi,
I've tried using the module the following way:
module "kinesis_firehose" {
for_each = aws_cloudwatch_log_group.cloudwatch_log_group
source = "disney/kinesis-firehose-splunk/aws"
version = "8.0.0"
region = local.region
arn_cloudwatch_logs_to_ship = "arn:aws:logs:${local.region}:${local.account_id}:log-group:${each.value.name}:*"
name_cloudwatch_logs_to_ship = each.value.name
hec_url = "https://......"
s3_bucket_name = "${each.value.name}-bucket"
hec_token = var.splunk_hec_token
}
It works great for just one log group.
But when I try to use the module with multiple log groups, I have to add the following configuration so the resource names are unique and terraform apply does not fail:
firehose_name = "kinesis-firehose-to-splunk-${each.key}"
kinesis_firehose_lambda_role_name = "KinesisFirehoseToLambaRole-${each.key}"
kinesis_firehose_iam_policy_name = "KinesisFirehose-Policy-${each.key}"
cloudwatch_to_firehose_trust_iam_role_name = "CloudWatchToSplunkFirehoseTrust-${each.key}"
lambda_function_name = "kinesis-firehose-transform-${each.key}"
kinesis_firehose_role_name = "KinesisFirehoseRole-${each.key}"
lambda_iam_policy_name = "Kinesis-Firehose-to-Splunk-Policy-${each.key}"
cloudwatch_to_fh_access_policy_name = "KinesisCloudWatchToFirehosePolicy-${each.key}"
cloudwatch_log_filter_name = "KinesisSubscriptionFilter-${each.key}"
log_stream_name = "SplunkDelivery-${each.key}"
While adding firehose_name makes sense (a separate Firehose delivery stream is created for each log group), some of the others don't: the roles, policies, and Lambda function could be reused instead of creating multiple instances of them.
@mlcooper Any idea how to achieve that ?
I'm working on some AWS Lambdas whose logs we want to forward to Splunk. The only thing is that we need to customize the transform Lambda function script somewhat. I've cloned the repo and made changes that should allow this, but I don't seem to be able to push them. Would you be able to give me permission to create a branch and push?
I was able to get the KMS key created and the HEC token encrypted. Running terraform plan succeeds, but I get this warning:
│ Warning: Argument is deprecated
│
│ with module.kinesis_firehose.aws_s3_bucket.kinesis_firehose_s3_bucket,
│ on .terraform/modules/kinesis_firehose/main.tf line 56, in resource "aws_s3_bucket" "kinesis_firehose_s3_bucket":
│ 56: resource "aws_s3_bucket" "kinesis_firehose_s3_bucket" {
│
│ Use the aws_s3_bucket_server_side_encryption_configuration resource instead
│ (and 3 more similar warnings elsewhere)
I have cloned the repo and will try to update to the new resource, but I just wanted to pass this along.
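For anyone updating: in AWS provider v4 and later, the in-bucket server_side_encryption_configuration block moves to its own resource. A minimal sketch (resource and bucket names here are illustrative, not necessarily the module's actual names):

```hcl
resource "aws_s3_bucket" "kinesis_firehose_s3_bucket" {
  bucket = "my-firehose-bucket" # example name
}

# Replaces the deprecated in-bucket server_side_encryption_configuration block.
resource "aws_s3_bucket_server_side_encryption_configuration" "kinesis_firehose_s3_bucket" {
  bucket = aws_s3_bucket.kinesis_firehose_s3_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}
```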
Error: error creating S3 bucket ACL for foobar-bucket: AccessControlListNotSupported:
The bucket does not allow ACLs
│ with module.kinesis_firehose.aws_s3_bucket_acl.kinesis_firehose_s3_bucket,
│ on .terraform/modules/kinesis_firehose/main.tf line 95, in resource "aws_s3_bucket_acl" "kinesis_firehose_s3_bucket":
│ 95: resource "aws_s3_bucket_acl" "kinesis_firehose_s3_bucket" {
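Since April 2023, new S3 buckets default to Object Ownership "Bucket owner enforced", which disables ACLs entirely and makes any aws_s3_bucket_acl call fail with this error. One workaround (a sketch of the idea, not necessarily how the module should fix it) is to opt the bucket back into ACLs via ownership controls before applying the ACL:

```hcl
# Re-enable ACLs on the bucket: "ObjectWriter" (or "BucketOwnerPreferred")
# permits ACLs, unlike the new default "BucketOwnerEnforced".
resource "aws_s3_bucket_ownership_controls" "kinesis_firehose_s3_bucket" {
  bucket = aws_s3_bucket.kinesis_firehose_s3_bucket.id

  rule {
    object_ownership = "ObjectWriter"
  }
}

resource "aws_s3_bucket_acl" "kinesis_firehose_s3_bucket" {
  depends_on = [aws_s3_bucket_ownership_controls.kinesis_firehose_s3_bucket]
  bucket     = aws_s3_bucket.kinesis_firehose_s3_bucket.id
  acl        = "private"
}
```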
I'm looking to use this module, but the documentation link for encrypting the HEC token with the KMS key no longer exists. Can you please provide details, or a link to documentation, on how to properly encrypt the HEC token for use with this module? I've been looking, but I haven't found a good way to do this yet.
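In the absence of the original doc, one way to produce the ciphertext is Terraform's own aws_kms_ciphertext data source (a sketch; the key and variable names are placeholders). The ciphertext_blob attribute is base64-encoded ciphertext, the same form `aws kms encrypt` returns:

```hcl
# Encrypts the plaintext HEC token with the given KMS key.
data "aws_kms_ciphertext" "hec_token" {
  key_id    = var.kms_key_arn                # placeholder
  plaintext = var.splunk_hec_token_plaintext # placeholder
}
```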
For the S3 bucket, you need to add "Bucket Key" and "Object Lock" parameters, as well as the ability to automatically delete files through lifecycle rules.
Error: creating Lambda Function (kinesis-firehose-transform): operation error Lambda: CreateFunction,
https response error StatusCode: 400, InvalidParameterValueException: The runtime parameter
of nodejs12.x is no longer supported for creating or updating AWS Lambda functions.
We recommend you use the new runtime (nodejs18.x) while creating or updating functions.
The configuration should use nodejs18.x.
I managed to get it working only with nodejs14.x.
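Assuming the module's nodejs_runtime variable (referenced elsewhere in these issues) controls the Lambda runtime, the deprecated default can be overridden in the module block:

```hcl
module "kinesis_firehose" {
  source = "disney/kinesis-firehose-splunk/aws"
  # ... other required arguments ...

  nodejs_runtime = "nodejs18.x" # override the deprecated default runtime
}
```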
Hi,
At the moment, both arn_cloudwatch_logs_to_ship and name_cloudwatch_logs_to_ship are required. Can arn_cloudwatch_logs_to_ship be made optional if we provide name_cloudwatch_logs_to_ship?
The region can be configured by the module using the provider configuration.
The account ID can be fetched by the module this way:
data "aws_caller_identity" "current" {}
@mlcooper WDYT?
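Concretely, the log group ARN could then be derived inside the module rather than passed in (a sketch of the idea, not the module's current code):

```hcl
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

locals {
  # Reconstruct the log group ARN from the log group name alone.
  arn_cloudwatch_logs_to_ship = "arn:aws:logs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:log-group:${var.name_cloudwatch_logs_to_ship}:*"
}
```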
Hi,
Shouldn't the logs be formatted differently if hec_endpoint_type is set to Event instead of the default Raw? I got the following error when I set my configuration to Event:
The data is not formatted correctly. To see how to properly format data for Raw or Event HEC endpoints, see Splunk Event Data (http://dev.splunk.com/view/event-collector/SP-CAAAE6P#data). HecServerErrorResponseException{serverRespObject=HecErrorResponseValueObject{text=Invalid data format, code=6, invalidEventNumber=0}, httpBodyAndStatus=HttpBodyAndStatus{statusCode=400, body={"text":"Invalid data format","code":6,"invalid-event-number":0}}, lifecycleType=EVENT_POST_NOT_OK, url=https://44.194.107.82:443, errorType=RECOVERABLE_DATA_ERROR, context=event_post}
Can anyone confirm it works with hec_endpoint_type set to Event?
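The error is consistent with the Event endpoint requiring each record to be a JSON envelope with an "event" field, whereas Raw accepts bare strings. A sketch of the kind of wrapping the transform Lambda would need (field values here are placeholders, not what the module actually emits):

```javascript
// Wrap a raw log line in the HEC event envelope expected by the Event
// endpoint. The sourcetype value is a placeholder.
function toHecEvent(message, timestampMs) {
    return JSON.stringify({
        time: timestampMs / 1000,          // HEC expects epoch seconds
        sourcetype: "aws:cloudwatchlogs",  // placeholder sourcetype
        event: message,
    });
}
```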
If you're a Splunk Cloud customer, once you've successfully deployed all the resources, you'll need to ensure that your Splunk Cloud instance has the Kinesis Data Firehose egress CIDRs allow-listed under "Server Settings" > "IP Allow List Management" > "HEC access for ingestion".
For more details on the relevant CIDRs: https://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-splunk-vpc
Currently the var.region variable is only used in the Kinesis Firehose IAM role trust policy, to allow logs from particular regions. We would like CloudWatch subscription filters from multiple regions to push to a Kinesis Firehose in a single region.
Hi, I am trying to use this module to send CloudWatch logs to Splunk, and I am facing this error:
│ Error: error creating S3 bucket ACL for doe-cdi-tf-splunk-bluebird: AccessControlListNotSupported: The bucket does not allow ACLs
│ status code: 400, request id: PQWWZAJS8Z77QR4N, host id: BkjYWmWoI1Aucg76Az0+GCVGRoAWb15VNSr1NSxL8z/QAbqpNt6IMjLQpZLfuBCr4FJ7Rf5EEg4=
│
│ with module.kinesis_firehose.aws_s3_bucket_acl.kinesis_firehose_s3_bucket,
│ on .terraform/modules/kinesis_firehose/main.tf line 95, in resource "aws_s3_bucket_acl" "kinesis_firehose_s3_bucket":
│ 95: resource "aws_s3_bucket_acl" "kinesis_firehose_s3_bucket" {
│
╵
╷
│ Error: error creating Lambda Function (1): AccessDeniedException:
│ status code: 403, request id: e4847f35-2362-4f23-bc4f-881b4fa684b5
│
│ with module.kinesis_firehose.aws_lambda_function.firehose_lambda_transform,
│ on .terraform/modules/kinesis_firehose/main.tf line 249, in resource "aws_lambda_function" "firehose_lambda_transform":
│ 249: resource "aws_lambda_function" "firehose_lambda_transform" {
│
The default value of the nodejs_runtime variable is deprecated; see https://aws.amazon.com/en/blogs/developer/announcing-the-end-of-support-for-node-js-12-x-in-the-aws-sdk-for-javascript-v3/. It is recommended to update it to the latest supported runtime.
I am working within a module that requires me to conditionally send logs to Splunk depending on the environment. When attempting to use the count meta-argument, I get the following:
╷
│ Error: Module module.kinesis_firehose_splunk contains provider configuration
│
│ Providers cannot be configured within modules using count, for_each or
│ depends_on.
╵
Related module declaration:
module "kinesis_firehose" {
source = "disney/kinesis-firehose-splunk/aws"
version = "4.0.0"
count = var.splunk_enabled ? 1 : 0
...
}
Would it be possible to remove the provider configuration from this module?
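Until the provider block is removed from the module, passing a root-level provider in explicitly is the usual pattern; this only works once the module itself contains no provider blocks, so the snippet below is a sketch of the target state rather than a fix for the current version:

```hcl
provider "aws" {
  region = "us-east-1" # example region
}

module "kinesis_firehose" {
  source  = "disney/kinesis-firehose-splunk/aws"
  count   = var.splunk_enabled ? 1 : 0
  # ... other required arguments ...
}
```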
Just ran into this today using an unpinned version of the module. When I would attempt to run the Lambda, I would get an error message like [SyntaxError: Cannot use import statement outside a module].
I noted that the Node.js runtime was 18.x on my working Lambda, while it was set to 20.x on the new Lambda that got set up.
I googled a bit and found the solution was to add a package.json to the lambda with the following contents:
{
"type": "module"
}
That seemed to get things working. I'll look at the module source code and see if I can come up with a patch.
Allow the HEC token to be passed as plaintext or as a Parameter Store parameter.
AWS displays the HEC token in the Kinesis Firehose settings in plain view anyway, so it is worth removing the requirement to create a KMS key, encrypt the token with it, and only then pass it as an input parameter.
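For the Parameter Store option, the token could be read with the aws_ssm_parameter data source (a sketch; the parameter name is a placeholder):

```hcl
# Reads a SecureString parameter; with_decryption returns the plaintext value.
data "aws_ssm_parameter" "hec_token" {
  name            = "/splunk/hec_token" # placeholder parameter name
  with_decryption = true
}
```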