awslabs / clickstream-analytics-on-aws

Build clickstream analytics on AWS for your mobile and web applications

Home Page: https://aws.amazon.com/solutions/implementations/clickstream-analytics-on-aws/

License: Apache License 2.0

JavaScript 1.52% Shell 0.46% TypeScript 82.92% HTML 0.02% SCSS 0.32% PLpgSQL 2.15% Dockerfile 0.06% Java 12.55%
aws aws-amplify aws-cdk aws-emr-serverless aws-kinesis-stream aws-msk aws-quicksight aws-redshift clickstream data-analysis

clickstream-analytics-on-aws's Introduction

Clickstream Analytics on AWS

An end-to-end solution to collect, ingest, analyze, and visualize clickstream data inside your web and mobile applications.

Solution Overview

This solution collects, ingests, analyzes, and visualizes clickstream events from your websites and mobile applications. Clickstream data is critical for online business analytics use cases, such as user behavior analysis, customer data platform, and marketing analysis. This data derives insights into the patterns of user interactions on a website or application, helping businesses understand user navigation, preferences, and engagement levels to drive product innovation and optimize marketing investments.

With this solution, you can quickly configure and deploy a data pipeline that fits your business and technical needs. It provides purpose-built software development kits (SDKs) that automatically collect common events and easy-to-use APIs to report custom events, enabling you to easily send your customers’ clickstream data to the data pipeline in your AWS account. The solution also offers pre-assembled dashboards that visualize key metrics about user lifecycle, including acquisition, engagement, activity, and retention, and adds visibility into user devices and geographies. You can combine user behavior data with business backend data to create a comprehensive data platform and generate insights that drive business growth.

Architecture Overview

architecture diagram

  1. Amazon CloudFront distributes the frontend web UI assets hosted in the Amazon S3 bucket, and the backend APIs hosted with Amazon API Gateway and AWS Lambda.
  2. The Amazon Cognito user pool or OpenID Connect (OIDC) is used for authentication.
  3. The web UI console uses Amazon DynamoDB to store persistent data.
  4. AWS Step Functions, AWS CloudFormation, AWS Lambda, and Amazon EventBridge are used for orchestrating the lifecycle management of data pipelines.
  5. The data pipeline is provisioned in the Region specified by the system operator. It consists of Application Load Balancer (ALB), Amazon ECS, Amazon Managed Streaming for Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon S3, Amazon EMR Serverless, Amazon Redshift, and Amazon QuickSight.

For more information, refer to the documentation.

SDKs

Clickstream Analytics on AWS provides client-side SDKs for multiple platforms, which make it easier for you to report events to the data pipeline created by the solution.

See this repo for SDK samples for the different platforms.

Deployment

Using AWS CloudFormation template

Follow the implementation guide to deploy the solution using the AWS CloudFormation template.

Using AWS CDK

Preparations

  • Make sure you have an AWS account
  • Configure credentials for the AWS CLI
  • Install Node.js LTS version 18.17.0 or later
  • Install Docker Engine
  • Install pnpm: npm install -g [email protected]
  • Install the solution's dependencies: pnpm install && pnpm projen && pnpm nx build @aws/clickstream-base-lib
  • Bootstrap the CDK toolkit stack into your AWS environment (only needed the first time you deploy via AWS CDK): npx cdk bootstrap

Deploy the web console

# deploy the web console of the solution
npx cdk deploy cloudfront-s3-control-plane-stack-global --parameters Email=<your email> --require-approval never

Deploy pipeline stacks

# deploy the ingestion server with s3 sink
# 1. check stack name in src/main.ts for other stacks
# 2. check the stack for required CloudFormation parameters
npx cdk deploy ingestion-server-s3-stack --parameters ...

Deploy local code for updating existing stacks created by the web console

# update the existing data modeling Redshift stack Clickstream-DataModelingRedshift-xxx
bash e2e-deploy.sh -n modelRedshiftStackName -s Clickstream-DataModelingRedshift-xxx
# update the existing web console
bash e2e-deploy.sh -n standardControlPlaneStackName -s <stack name of existing web console>

See this file for complete stack names.

Test

pnpm test

Local development for web console

  • Step 1: Deploy the solution control plane (creates DynamoDB tables, the State Machine, and other resources).
  • Step 2: Open the Amazon Cognito console, select the corresponding user pool, click the App integration tab, select the application in the App client list, edit the Hosted UI, and add a new URL http://localhost:3000/signin to the Allowed callback URLs.
  • Step 3: Go to the folder src/control-plane/local:
cd src/control-plane/local
# run backend server local
bash start.sh -s backend
# run frontend server local
bash start.sh -s frontend

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.

File Structure

Upon successfully cloning the repository into your local development environment but prior to running the initialization script, you will see the following file structure in your editor:

├── CHANGELOG.md                       [Change log file]
├── CODE_OF_CONDUCT.md                 [Code of conduct file]
├── CONTRIBUTING.md                    [Contribution guide]
├── LICENSE                            [LICENSE for this solution]
├── NOTICE.txt                         [Notice for 3rd-party libraries]
├── README.md                          [Read me file]
├── buildspec.yml
├── cdk.json
├── codescan-prebuild-custom.sh
├── deployment                         [shell scripts for packaging distribution assets]
│   ├── build-open-source-dist.sh
│   ├── build-s3-dist-1.sh
│   ├── build-s3-dist.sh
│   ├── cdk-solution-helper
│   ├── post-build-1
│   ├── run-all-test.sh
│   ├── solution_config
│   ├── test
│   ├── test-build-dist.sh
│   └── test-deploy-tag-images.sh
├── docs                               [document]
│   ├── en
│   ├── index.html
│   ├── mkdocs.base.yml
│   ├── mkdocs.en.yml
│   ├── mkdocs.zh.yml
│   ├── site
│   ├── test-deploy-mkdocs.sh
│   └── zh
├── examples                           [example code]
│   ├── custom-plugins
│   └── standalone-data-generator
├── frontend                           [frontend source code]
│   ├── README.md
│   ├── build
│   ├── config
│   ├── esbuild.ts
│   ├── node_modules
│   ├── package.json
│   ├── public
│   ├── scripts
│   ├── src
│   ├── tsconfig.json
├── package.json
├── sonar-project.properties
├── src                                [all backend source code]
│   ├── alb-control-plane-stack.ts
│   ├── analytics
│   ├── base-lib
│   ├── cloudfront-control-plane-stack.ts
│   ├── common
│   ├── control-plane
│   ├── data-analytics-redshift-stack.ts
│   ├── data-modeling-athena-stack.ts
│   ├── data-pipeline
│   ├── data-pipeline-stack.ts
│   ├── data-reporting-quicksight-stack.ts
│   ├── ingestion-server
│   ├── ingestion-server-stack.ts
│   ├── kafka-s3-connector-stack.ts
│   ├── main.ts
│   ├── metrics
│   ├── metrics-stack.ts
│   └── reporting
├── test                               [test code]
│   ├── analytics
│   ├── common
│   ├── constants.ts
│   ├── control-plane
│   ├── data-pipeline
│   ├── ingestion-server
│   ├── jestEnv.js
│   ├── metrics
│   ├── reporting
│   ├── rules.ts
│   └── utils.ts
├── tsconfig.dev.json
├── tsconfig.json

clickstream-analytics-on-aws's People

Contributors

am29d, amliuyong, chenhaiyun, dengmingtong, dependabot[bot], jingnanl, llmin, luorobin-a2z, qiaow02, rrxie, techeditor, tyyzqmf, yanbasic, zhu-xiaowei, zxkane


clickstream-analytics-on-aws's Issues

custom plugin's description display error

Summary

custom plugin's description display error

Steps to reproduce

What is the current bug behavior?

What is the expected correct behavior?

Relevant logs and/or screenshots

Screenshot 2023-07-18 16 14 54

Possible fixes


This is 🐛 Bug Report

fail to create reporting stack

Summary

Fail to create reporting stack

Steps to reproduce

  1. create project then configure the pipeline
  2. enable redshift and reporting
  3. the pipeline creation fails because the reporting stack creation fails

What is the current bug behavior?

What is the expected correct behavior?

Relevant logs and/or screenshots

fail to create data source with below error

ClickstreamDataSource | CREATE_FAILED | Resource handler returned message: "The authentication type 13 is not supported. Check that you have configured the pg_hba.conf file to include the client's IP address or subnet, and that it is using an authentication scheme supported by the driver." (RequestToken: df91a3a9-eb5d-7a04-fc9f-71a1e0eca546, HandlerErrorCode: null)

Possible fixes


This is 🐛 Bug Report

Create DataProcessing failed

Summary

Error Info:
Received response status [FAILED] from custom resource. Message returned: Cannot read properties of undefined (reading X86_64)

Version:
(Version v1.1.0)(Build dev-main-202308070335-92fdfa0)

Stack:
Clickstream-DataProcessing

Steps to reproduce

What is the current bug behavior?

What is the expected correct behavior?

Relevant logs and/or screenshots

Possible fixes


This is 🐛 Bug Report

Report cannot be filtered by date for Device tab.

Summary

The QuickSight report cannot be filtered by date in the Device tab.

Steps to reproduce

  1. Open the QuickSight dashboard.
  2. Choose the "Device" tab.
  3. Choose any date range; the charts below are not refreshed.

What is the current bug behavior?

Dashboard cannot be filtered by date range.

What is the expected correct behavior?

Dashboard should be filtered by date range.

Relevant logs and/or screenshots

Device tab:
Screenshot 2023-07-19 at 11 09 13

Other tab:

Screenshot 2023-07-19 at 11 09 23

Possible fixes


This is 🐛 Bug Report

Remove unnecessary log in Load ODSEvent To Redshift Workflow

Summary

Remove unnecessary logs in the Load ODS Event To Redshift workflow.

Steps to reproduce

What is the current bug behavior?

All query results from DynamoDB are printed to CloudWatch Logs.

What is the expected correct behavior?

Only the necessary logs should be printed.

Relevant logs and/or screenshots

Possible fixes


This is 🐛 Bug Report

incorrect created time when viewing the existing application

Summary

The created time of an existing application is shown incorrectly; see the screenshot below.

Steps to reproduce

  1. create a project with pipeline
  2. add an application to the project
  3. go back to projects home
  4. choose the project then open the application page

What is the current bug behavior?

The Created Time shown is the current time rather than the actual creation time.

What is the expected correct behavior?

Relevant logs and/or screenshots

image

Possible fixes


This is 🐛 Bug Report

improve the UX when the session times out

Describe the feature

Provide a better UX to notify the user when the session times out, or keep the session renewed automatically.

The screenshot below shows the current behavior after leaving the web console for hours and then returning to the tab:

image

Use Case

Proposed Solution - Optional

If an API call fails with an authentication error, we can notify the customer and provide a login option.

A more graceful approach is to use the refresh token to renew the access token.

Other Information

Notes:

  • this feature has an UI update
  • this feature contains an implementation guide update

This is a 🚀 Feature Request

The Add Application button status should auto-refresh when the pipeline becomes active

Summary

When the pipeline is active, the Add Application button is still disabled; the user needs to click the refresh button manually to add an app.

Steps to reproduce

Create a pipeline and wait for it to reach Active status.

What is the current bug behavior?

The Add Application button remains disabled when the pipeline status is active.

What is the expected correct behavior?

The Add Application button status should refresh automatically when the pipeline becomes active.
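The expected behavior can be sketched as a simple polling loop; `get_pipeline_status` is a hypothetical stub standing in for the console's backend API call:

```shell
#!/bin/sh
# Hypothetical status source; the real console would query the backend API.
get_pipeline_status() {
  echo "Active"
}

# Poll until the pipeline reports Active, then enable the button.
poll_until_active() {
  while [ "$(get_pipeline_status)" != "Active" ]; do
    sleep 5
  done
  echo "Add Application button enabled"
}

poll_until_active
```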

Relevant logs and/or screenshots

Screenshot 2023-07-18 22 39 08

Possible fixes


This is 🐛 Bug Report

check incorrect endpoint configurations

Describe the feature

Validate the configuration of existing VPC endpoints for private subnets, even if the subnets have a route to a NAT gateway / instance.

Use Case

Sometimes the pipeline is deployed in subnets with a NAT gateway / instances. However, VPC endpoints might also be configured for those subnets, and misconfigured endpoints can cause connectivity issues.

Proposed Solution - Optional

Other Information

related issues #87

Notes:

  • this feature has an UI update
  • this feature contains an implementation guide update

This is a 🚀 Feature Request

fail to update the existing data modeling redshift stack if the schema has views/tables created by other users

Summary

The solution creates a BI user in QuickSight, then grants that user read-only permission on the tables and views needed for dashboard visualization.

In some cases the update fails due to a permission error.

Steps to reproduce

  1. create a project, configuring a data pipeline with Redshift for data modeling
  2. manually create some views for other analysis purposes in the schema for the project and application
  3. update the pipeline to a newer version with some schema changes or new views added

What is the current bug behavior?

The data modeling - Redshift stack failed to be updated with below error,

Query #3 failed with ERROR: permission denied for relation item_view

What is the expected correct behavior?

Relevant logs and/or screenshots

Possible fixes

When creating/updating the Redshift tables/views for the project, we grant read-only permission to the BI user:

GRANT USAGE ON SCHEMA {{schema}} TO {{user_bi}};
GRANT SELECT ON ALL TABLES IN SCHEMA {{schema}} TO {{user_bi}};
ALTER DEFAULT PRIVILEGES IN SCHEMA {{schema}} GRANT SELECT ON TABLES TO {{user_bi}};

Do not grant usage on all tables/views of the schema to the BI user; instead, use an allow-list to grant only the tables/views the BI user needs.
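A minimal sketch of the allow-list approach, generating the GRANT statements from an explicit list of relations; the schema, user, and relation names here are examples (only item_view comes from this issue):

```shell
#!/bin/sh
# Generate GRANT statements only for an explicit allow-list of relations,
# instead of GRANT SELECT ON ALL TABLES IN SCHEMA. Names are examples.
schema="app1"
bi_user="clickstream_bi"
allow_list="event_view item_view"

generate_grants() {
  echo "GRANT USAGE ON SCHEMA ${schema} TO ${bi_user};"
  for rel in $allow_list; do
    echo "GRANT SELECT ON ${schema}.${rel} TO ${bi_user};"
  done
}

generate_grants
```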


This is 🐛 Bug Report

Control plane doesn't show "Upgrade" button for pipeline

Summary

After upgrading the Clickstream template via CloudFormation, control plane website doesn't show "Upgrade" button for a pipeline which was created in an older version.

Steps to reproduce

  1. Create a pipeline in the version v1.0.2-202308290112-d1be419e
  2. Upgrade Clickstream using template via CloudFormation
  3. In the control plane website, confirm the new version: v1.0.2-202308291429-59fbd0c8.

What is the current bug behavior?

Check the pipeline created in step 1, there is no "Upgrade" button.

What is the expected correct behavior?

There should be an "Upgrade" button since the pipeline version and solution version differ.

Relevant logs and/or screenshots

Screenshot 2023-08-30 at 09 53 19

Possible fixes


This is 🐛 Bug Report

should not allow configuring a pipeline without a custom transformer when using a third-party SDK

Summary

Same as the title: the pipeline should not be configurable without a custom transformer when using a third-party SDK.

Steps to reproduce

  1. create a project then configure pipeline
  2. choose using third-party sdk
  3. configure the data processing module. Note: there is no transformer plugin created
  4. the pipeline configuration can be completed and the pipeline created

What is the current bug behavior?

What is the expected correct behavior?

The pipeline configuration should be prevented when using a third-party SDK without a custom transformer.

Relevant logs and/or screenshots

Possible fixes


This is 🐛 Bug Report

bump ingestion components to latest/stable releases

Summary

Let's bump the ingestion components to latest OSS releases.

What is the expected correct behavior?

  • latest stable Nginx release
  • vector v0.31.0 with KDS enhancement

This is a 🚀 Feature Request

Delete App failed with error "The current pipeline status does not allow update"

Summary

Create an app and upload the past 30 days of events. The current pipeline status is Active in both the control plane and CloudFormation, but clicking the delete button shows the error: "The current pipeline status does not allow update".

Steps to reproduce

  1. create a pipeline
  2. create an app
  3. upload the last 30 days of events using the standalone-data-generator Python script
  4. select the app and click the delete button
  5. the error shows

What is the current bug behavior?

An app that already has events can't be deleted, while an app with no events can be deleted as expected.

What is the expected correct behavior?

The app can be deleted successfully.

Relevant logs and/or screenshots

Screenshot 2023-07-19 09 10 56

Possible fixes


This is 🐛 Bug Report

force to use IMDSv2 only in EC2 managed by ECS for ingestion server

Describe the feature

Instance Metadata v2 improves the security posture of the instance by adding session authentication to every HTTP request. Teams that do not disable IMDSv1 will miss out on the mandated protection provided by session authentication of IMDSv2.

Blog: https://aws.amazon.com/blogs/security/defense-in-depth-open-firewalls-reverse-proxies-ssrf-vulnerabilities-ec2-instance-metadata-service

Use Case

Proposed Solution - Optional

Other Information

Notes:

  • this feature has an UI update
  • this feature contains an implementation guide update

This is a 🚀 Feature Request

Cannot update ingestion server endpoint path

Summary

I tried to update the ingestion server endpoint path from /collect to /g/collect; the CloudFormation stack updated successfully and the ECS task definition was updated correctly, but the rule on the ALB still uses /collect.

Steps to reproduce

What is the current bug behavior?

What is the expected correct behavior?

Relevant logs and/or screenshots

Possible fixes


This is 🐛 Bug Report

pipeline upgrade fails due to EMR in STARTED status

Summary

When upgrading the pipeline from 1.0.0 to 1.0.1, the data processing stack fails.

Steps to reproduce

  1. update control plane
  2. select target pipeline to upgrade
  3. click 'upgrade' button

What is the current bug behavior?

The data processing stack fails to upgrade.

What is the expected correct behavior?

The stack upgrades successfully.

Relevant logs and/or screenshots

Possible fixes


This is 🐛 Bug Report

Can't remove all resources when one stack's deletion times out

Summary

When one stack's deletion times out, the following delete steps are not triggered. As a result, many resources are not deleted.

Steps to reproduce

What is the current bug behavior?

What is the expected correct behavior?

Relevant logs and/or screenshots

Possible fixes


This is 🐛 Bug Report

optimize the backend build

Describe the feature

  • use the prebuilt aws-lambda-web-adapter to speed up the build
  • move the backend build Dockerfile into src/control-plane

Use Case

Proposed Solution - Optional

Other Information

Notes:

  • this feature has an UI update
  • this feature contains an implementation guide update

Web console will auto refresh about every 3 minutes in BJS region

❓ General Issue

The Question

When creating a pipeline, if we stay on the edit page for more than 3 minutes, the page auto-refreshes and all the form content is cleared.

Other information

The web console is created using a custom domain and OIDC (Keycloak) in the BJS region.
All pages automatically refresh about every 3 minutes.
Both Chrome and Firefox have this issue.
This issue exists with versions 1.0.0 and 1.0.2 of the web console.

Edit pipeline should allow changing DataProcessingCronOrRateExpression

Summary

Editing a pipeline should allow changing DataProcessingCronOrRateExpression.

Steps to reproduce

  1. edit the pipeline
  2. in step 2, edit the data processing cron expression
  3. submit the changes; an error is reported

What is the current bug behavior?

The control plane shows the error Property modification not allowed: DataProcessingCronOrRateExpression when editing the pipeline.

What is the expected correct behavior?

Editing the pipeline succeeds.

Relevant logs and/or screenshots

Possible fixes

Add the key DataProcessingCronOrRateExpression to the allow list.


This is 🐛 Bug Report

"View Details" button for an APP shows an empty page

Summary

"View Details" button for an APP does not work.

Steps to reproduce

  1. install control plane
  2. create a pipeline
  3. add an app
  4. select the app, click the "View Details" button

What is the current bug behavior?

What is the expected correct behavior?

Relevant logs and/or screenshots

Possible fixes


This is 🐛 Bug Report

Data processing interval validation is not consistent between frontend and backend

Summary

In the control plane website, data processing interval validation is not consistent between the frontend and backend.

Steps to reproduce

  1. Create a pipeline in the control plane website; in Step 3 Data processing & modeling, choose a fixed rate of 1 minute for "Data processing interval"
  2. Click the "Next" button; the frontend validates the input and shows the error message "Data processing interval could not be less than 3 minutes"
  3. Change the fixed rate to 3 minutes for "Data processing interval" and continue configuring the pipeline
  4. In the last step, submit the request; it fails with "Validation error: the minimum interval of data processing is 6 minutes."
  5. Change the data processing interval to 6 minutes and submit the request again; it succeeds.

What is the current bug behavior?

Data processing interval validation is inconsistent between the frontend and backend.

What is the expected correct behavior?

Data processing interval validation should be consistent between the frontend and backend, whether the minimum is 3 minutes or 6 minutes.

Relevant logs and/or screenshots

"Data processing interval could not be less than 3 minutes"
"Validation error: the minimum interval of data processing is 6 minutes."
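One way to keep the two sides consistent is a single shared minimum. A minimal sketch in shell, assuming the 6-minute value from the backend error above (whether 3 or 6 is correct is for the maintainers to decide):

```shell
#!/bin/sh
# Single source of truth for the minimum interval, used by both validations.
MIN_INTERVAL_MINUTES=6

validate_interval() {
  if [ "$1" -lt "$MIN_INTERVAL_MINUTES" ]; then
    echo "Data processing interval could not be less than ${MIN_INTERVAL_MINUTES} minutes"
    return 1
  fi
  echo "ok"
}

validate_interval 3   # rejected
validate_interval 6   # accepted
```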

Possible fixes


This is 🐛 Bug Report

Pipeline creation fails in DataProcessing

Summary

Pipeline creation fails in DataProcessing stack creation

Steps to reproduce

Followed the exact same steps as in the workshop: created a project from the web console, used Kinesis as the sink; after entering all configuration, on creation of the pipeline the data processing stack fails.

What is the current bug behavior?

Pipeline creation fails

What is the expected correct behavior?

Pipeline should get created

Relevant logs and/or screenshots

Received response status [FAILED] from custom resource. Message returned: Socket timed out without establishing a connection within 5000 ms Logs: /aws/lambda/Clickstream-DataProcessin-GlueTablePartitionSyncer-LHI331oLoIf1 at Timeout._onTimeout (/var/task/index.js:13076:30) at listOnTimeout (node:internal/timers:559:17) at processTimers (node:internal/timers:502:7) (RequestId: bd2a66f8-2797-41d4-8323-fb5c0c0b0ea6)

Possible fixes

Not Sure


This is 🐛 Bug Report

Need to parse web SDK host_name parameter

Summary

Need to parse web SDK host_name parameter:

[{
	"hashCode": "aa46dc0d",
	"event_type": "_user_login",
	"event_id": "a1d34a1e-1aa7-4827-8028-50a96c43abb3",
	"device_id": "928d5a81-cbd2-4da9-9c57-6fed6109b43f",
	"unique_id": "91ff00df-e96c-4a02-b401-90514c229e16",
	"app_id": "reactApp",
	"timestamp": 1689212732143,
	"host_name": "example.com",
	"locale": "zh-CN",
	"system_language": "zh",
	"country_code": "CN",
	"zone_offset": 28800000,
	"make": "Gecko",
	"platform": "Web",
	"screen_height": 1304,
	"screen_width": 932,
	"sdk_name": "aws-solution-clickstream-sdk",
	"sdk_version": "",
	"user": {
		"_user_first_touch_timestamp": {
			"value": 1689044839904,
			"set_timestamp": 1689044839904
		}
	},
	"attributes": {
		"userName": "carl",
		"userAge": 20
	}
}]
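A minimal sketch of pulling host_name out of such an event with sed; the real parsing would happen in the solution's data processing job, and the JSON here is a trimmed-down version of the event above:

```shell
#!/bin/sh
# Extract the host_name field from a simplified clickstream event.
event='{"event_type":"_user_login","host_name":"example.com","platform":"Web"}'

host_name=$(printf '%s' "$event" | sed -n 's/.*"host_name":"\([^"]*\)".*/\1/p')
echo "$host_name"
```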

What is the expected correct behavior?

Parse and show the host_name value in Redshift


This is a 🚀 Feature Request

bump to node 18

Summary

Node 16 will no longer be maintained after October 2023; let's bump the Node version used in the project to the latest Node 18.

What is the expected correct behavior?

Consider the modules below:

  • frontend of web console
  • main project for cloud infra
  • api project of web console

This is a 🚀 Feature Request

fail to retry the failed pipeline creation

Summary

When retrying a failed pipeline creation, the retry deployment does not start.

Steps to reproduce

  1. create a project with pipeline configuration
  2. the pipeline creation failed due to some service quota or conflict
  3. click the retry button in the pipeline detail page

What is the current bug behavior?

The underlying CloudFormation stack was not retried or updated.

What is the expected correct behavior?

Relevant logs and/or screenshots

See the error below in the output of the Step Functions workflow:

"Cause": "{"errorType":"Error","errorMessage":"This stack is currently in a non-terminal [CREATE_FAILED] state. To update the stack from this state, please use the disable-rollback parameter with update-stack API. To rollback to the last known good state, use the rollback-stack API","trace":["Error: This stack is currently in a non-terminal [CREATE_FAILED] state. To update the stack from this state, please use the disable-rollback parameter with update-stack API. To rollback to the last known good state, use the rollback-stack API"," at updateStack (/var/task/index.js:2362:11)"," at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"]}",
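The error message itself names the remediation: rollback-stack (or update-stack with disable-rollback) for a non-terminal CREATE_FAILED state. A hedged sketch of choosing the next CloudFormation call from the stack status; the status-to-action mapping here is illustrative, not exhaustive:

```shell
#!/bin/sh
# Pick the next CloudFormation action for a retry, based on stack status.
next_action() {
  case "$1" in
    CREATE_FAILED)
      # non-terminal failed state: roll back first, per the error message
      echo "rollback-stack" ;;
    CREATE_COMPLETE|UPDATE_COMPLETE|UPDATE_ROLLBACK_COMPLETE)
      echo "update-stack" ;;
    *)
      echo "wait" ;;
  esac
}

next_action CREATE_FAILED
```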

Possible fixes


This is 🐛 Bug Report

Domain Name regex validation

We are trying to launch the stack with a custom domain, and we are using example.services. However, the existing regex validation for the TLD (Top-Level Domain) part of the domain name seems to only allow between 2 and 6 characters. This is problematic for us because we are using a TLD with more than 6 characters (.services).
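A quick illustration of the problem, using a simplified form of such a TLD pattern (the exact production regex may differ); widening the quantifier to the 63-character DNS label limit accepts .services:

```shell
#!/bin/sh
# Simplified domain patterns: the TLD part differs only in its length limit.
old='^[a-z0-9.-]+\.[a-zA-Z]{2,6}$'
new='^[a-z0-9.-]+\.[a-zA-Z]{2,63}$'

check() { printf '%s' "$2" | grep -Eq "$1" && echo "valid" || echo "invalid"; }

check "$old" "example.services"   # "services" has 8 letters, over the 6 limit
check "$new" "example.services"
```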

Pipeline update and rollback failed with DataProcessing error in BJS region

Summary

Pipeline update and rollback failed with a DataProcessing error, and clicking the Retry button also doesn't work.

Environment:
BJS Region + OIDC(keycloak) + (S3, Athena only)

Steps to reproduce

Create a pipeline with the above environment in v1.0.0, then update the control plane to v1.0.2 and update the pipeline; the error occurs.

What is the current bug behavior?

Pipeline update failed and retry also failed.

What is the expected correct behavior?

The pipeline can be updated from v1.0.0 to v1.0.2.

Relevant logs and/or screenshots

Screenshot 2023-08-30 19 21 32, Screenshot 2023-08-30 19 19 14, Screenshot 2023-08-30 19 21 06

Possible fixes

This is 🐛 Bug Report

Control plane View Details button shows an empty page

Summary

The control plane View Details button shows an empty page.

Steps to reproduce

  1. deploy control plane in us-east-1
  2. add an App
  3. select the app, and click "View Details"

What is the current bug behavior?

What is the expected correct behavior?

Relevant logs and/or screenshots

Possible fixes


This is 🐛 Bug Report

updating the existing pipeline CORS does not work

Summary

Modifying the pipeline CORS setting to * does not work.

Steps to reproduce

  1. modify a pipeline that has no CORS set, enter '*', and update the pipeline
  2. wait for the pipeline status to become active
  3. send a request to the endpoint
  4. a CORS error appears with no Access-Control-Allow-Origin header

What is the current bug behavior?

Screenshot 2023-07-13 09 38 38

What is the expected correct behavior?

The request is sent successfully.

Relevant logs and/or screenshots

Possible fixes

Creating a new pipeline with CORS * works, so this appears to be an issue with updating the pipeline.


This is 🐛 Bug Report

Lack of authorization to update pipeline

Summary

When updating the pipeline, the data modeling stack fails.
image

Steps to reproduce

What is the current bug behavior?

What is the expected correct behavior?

Relevant logs and/or screenshots

Possible fixes


This is 🐛 Bug Report

missing validation for the custom domain

Summary

In the ingestion settings, the domain name parameter is not validated, and this field cannot be modified.

Steps to reproduce

  • Create a project.
  • Configure pipeline.
  • Enable HTTPS in ingestion endpoint settings.
  • Input any value in domain name and submit.

What is the current bug behavior?

The domain name parameter is not validated, and this field cannot be modified.

What is the expected correct behavior?

The domain name parameter should be validated, or the field should be modifiable.

Relevant logs and/or screenshots

image

Possible fixes


This is 🐛 Bug Report

missing a clear description of the built-in transform plugin

Summary

The built-in transform plugin lacks a clear description.

image

Steps to reproduce

  1. login web console
  2. click the Plugins in left sidebar

What is the current bug behavior?

What is the expected correct behavior?

Relevant logs and/or screenshots

Possible fixes


This is 🐛 Bug Report

examples/standalone-data-generator does not honor other options in configuration file

Summary

Used examples/standalone-data-generator to generate test events, but it does not honor other options in the configuration file, such as isCompressEvents.
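A minimal sketch of honoring the option; the configuration file shape here is an assumption for illustration (only the isCompressEvents key comes from this report):

```shell
#!/bin/sh
# Branch on isCompressEvents read from an assumed generator config file.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
{ "isCompressEvents": false }
EOF

if grep -q '"isCompressEvents": *false' "$cfg"; then
  echo "sending events uncompressed"
else
  echo "sending events gzip-compressed"
fi
rm -f "$cfg"
```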

Steps to reproduce

  1. download the amplifyconfiguration.json from web console for my app
  2. use the example standalone-data-generator to generate the test events

What is the current bug behavior?

The example does not generate the test events without compression even when configured to.

What is the expected correct behavior?

Relevant logs and/or screenshots

Possible fixes


This is 🐛 Bug Report

Newly created ECS cluster uses old ECS task definition

Summary

I created a new pipeline from the control plane; checking the ECS cluster service, it used an old task definition created earlier.

image

Steps to reproduce

What is the current bug behavior?

What is the expected correct behavior?

Relevant logs and/or screenshots

Possible fixes


This is 🐛 Bug Report

No Athena stack when Redshift is disabled

Summary

The Athena stack is not created when Redshift is disabled.

Steps to reproduce

What is the current bug behavior?

What is the expected correct behavior?

Relevant logs and/or screenshots

Possible fixes


This is 🐛 Bug Report

Wrong pattern for Parameter ServerCorsOrigin

Summary

Wrong pattern for Parameter ServerCorsOrigin

I want to set ServerCorsOrigin to http://xxx.cloudfront.net

and get the CloudFormation error:

Parameter ServerCorsOrigin failed to satisfy constraint: ServerCorsOrigin must match pattern ^$|\*$|^([a-z0-9A-Z#$&@_%~\*\.\-]+\.[a-zA-Z0-9]{2,6}(,\s*[a-z0-9A-Z#$&@_%~\*\.\-]+\.[a-zA-Z0-9]{2,6})*)$
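To see why the origin is rejected, test a simplified single-origin form of that pattern with grep -E; the character class contains no ':' or '/', so any value carrying a scheme cannot match:

```shell
#!/bin/sh
# Simplified single-origin form of the ServerCorsOrigin pattern.
pattern='^$|^([a-zA-Z0-9#$&@_%~*.-]+\.[a-zA-Z0-9]{2,6})$'

check() { printf '%s' "$1" | grep -Eq "$pattern" && echo "valid" || echo "invalid"; }

check "xxx.cloudfront.net"          # bare domain matches
check "http://xxx.cloudfront.net"   # ':' and '/' are outside the class
```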

Steps to reproduce

What is the current bug behavior?

What is the expected correct behavior?

Relevant logs and/or screenshots

Possible fixes


This is 🐛 Bug Report

validation on db user name for Redshift cluster

Describe the feature

We need to improve the sanity check in both the frontend and backend of the web console.

Use Case

Proposed Solution - Optional

Other Information

Notes:

  • this feature has an UI update
  • this feature contains an implementation guide update

This is a 🚀 Feature Request

mitigate the warnings when synthesizing the CDK app

Summary

See the warnings below when synthesizing the application:

[Warning at /public-exist-vpc-control-plane-stack/PortalVPC] fromVpcAttributes: 'availabilityZones' is a list token: the imported VPC will not work with constructs that require a list of subnets at synthesis time. Use 'Vpc.fromLookup()' or 'Fn.importListValue' instead.
[Warning at /public-exist-vpc-control-plane-stack/PortalVPC] fromVpcAttributes: 'publicSubnetIds' is a list token: the imported VPC will not work with constructs that require a list of subnets at synthesis time. Use 'Vpc.fromLookup()' or 'Fn.importListValue' instead.
[Warning at /public-exist-vpc-control-plane-stack/PortalVPC] fromVpcAttributes: 'privateSubnetIds' is a list token: the imported VPC will not work with constructs that require a list of subnets at synthesis time. Use 'Vpc.fromLookup()' or 'Fn.importListValue' instead.
[Warning at /public-exist-vpc-control-plane-stack/PortalVPC/PublicSubnet1] No routeTableId was provided to the subnet at 'public-exist-vpc-control-plane-stack/PortalVPC/PublicSubnet1'. Attempting to read its .routeTable.routeTableId will return null/undefined. (More info: https://github.com/aws/aws-cdk/pull/3171)
[Warning at /public-exist-vpc-control-plane-stack/PortalVPC/PrivateSubnet1] No routeTableId was provided to the subnet at 'public-exist-vpc-control-plane-stack/PortalVPC/PrivateSubnet1'. Attempting to read its .routeTable.routeTableId will return null/undefined. (More info: https://github.com/aws/aws-cdk/pull/3171)
[Warning at /public-exist-vpc-custom-domain-control-plane-stack/PortalVPC] fromVpcAttributes: 'availabilityZones' is a list token: the imported VPC will not work with constructs that require a list of subnets at synthesis time. Use 'Vpc.fromLookup()' or 'Fn.importListValue' instead.
[Warning at /public-exist-vpc-custom-domain-control-plane-stack/PortalVPC] fromVpcAttributes: 'publicSubnetIds' is a list token: the imported VPC will not work with constructs that require a list of subnets at synthesis time. Use 'Vpc.fromLookup()' or 'Fn.importListValue' instead.
[Warning at /public-exist-vpc-custom-domain-control-plane-stack/PortalVPC] fromVpcAttributes: 'privateSubnetIds' is a list token: the imported VPC will not work with constructs that require a list of subnets at synthesis time. Use 'Vpc.fromLookup()' or 'Fn.importListValue' instead.
[Warning at /public-exist-vpc-custom-domain-control-plane-stack/PortalVPC/PublicSubnet1] No routeTableId was provided to the subnet at 'public-exist-vpc-custom-domain-control-plane-stack/PortalVPC/PublicSubnet1'. Attempting to read its .routeTable.routeTableId will return null/undefined. (More info: https://github.com/aws/aws-cdk/pull/3171)
[Warning at /public-exist-vpc-custom-domain-control-plane-stack/PortalVPC/PrivateSubnet1] No routeTableId was provided to the subnet at 'public-exist-vpc-custom-domain-control-plane-stack/PortalVPC/PrivateSubnet1'. Attempting to read its .routeTable.routeTableId will return null/undefined. (More info: https://github.com/aws/aws-cdk/pull/3171)

What is the expected correct behavior?

Use the CDK-recommended API to eliminate these warnings.
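A sketch of the suggested direction, assuming the control-plane stack can be given an env with a concrete account and region so that `Vpc.fromLookup()` is usable (construct and context ids here are illustrative):

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import { Construct } from 'constructs';

export class ControlPlaneStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Vpc.fromLookup() resolves availability zones, subnet ids, and route
    // tables at synthesis time from the CDK context cache, so the
    // list-token and missing-routeTableId warnings emitted by
    // fromVpcAttributes() go away. It requires the stack's env to carry
    // an explicit account and region.
    const portalVpc = ec2.Vpc.fromLookup(this, 'PortalVPC', {
      vpcId: this.node.tryGetContext('vpcId'),
    });
  }
}
```

For stacks that must stay environment-agnostic, `Fn.importListValue` is the alternative the warning itself suggests.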


This is a 🚀 Feature Request

Pipeline upgrade failed due to DataProcessing

Summary

When upgrading the pipeline from v1.0.0 to the v1.0.1 candidate, the DataProcessing module fails with this error:

Resource handler returned message: "Invalid request provided: Application 00fborfm8ne0ie09 must be in one of the statuses to be updated: [STOPPED, CREATED]. Current status: STARTED (Service: EmrServerless, Status Code: 400, Request ID: 90432f20-335c-4f32-bf98-ec425633dfc4)" (RequestToken: 44056af7-2186-8c1c-ebcc-c3cc8b546f92, HandlerErrorCode: InvalidRequest)

Steps to reproduce

  1. Deploy a control plane using the v1.0.0 template.
  2. Create a pipeline, then create an app.
  3. Upload the last 30 days of events.
  4. Update the control plane to the v1.0.1 candidate in CloudFormation.
  5. Click the update pipeline button in the control plane.

What is the current bug behavior?

The pipeline update failed.

What is the expected correct behavior?

The pipeline updates to the newer version successfully.

Relevant logs and/or screenshots

Screenshots: 2023-07-19 11:30:21 and 2023-07-19 11:31:16

Possible fixes


This is 🐛 Bug Report

Fails to deploy the web console in some regions with main-branch code

Summary

Invalid CompatibleArchitectures for Lambda layer

2023-07-18 00:41:53.180000+00:00 CREATE_FAILED AWS::Lambda::LayerVersion ClickStreamApiLambdaAdapterLayerX86C4A72260 Resource handler returned message: "CompatibleArchitectures are not supported in me-central-1. Please remove the CompatibleArchitectures value from your request and try again (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: e68c64bf-e8d7-4a51-bdaf-eae496b46435; Proxy: null)" (RequestToken: db6ef0bc-1a8e-d53c-4366-f9a4030f128e, HandlerErrorCode: GeneralServiceException)

Steps to reproduce

  1. Deploy to one of the following regions: me-central-1, eu-central-2, or eu-central-1.
  2. The deployment fails with the error above.

What is the current bug behavior?

What is the expected correct behavior?

Relevant logs and/or screenshots

Possible fixes
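One likely fix, assuming the layer is defined with the CDK `LayerVersion` construct (the construct id and asset path below are illustrative): `compatibleArchitectures` is descriptive metadata only, so it can simply be omitted in regions whose Lambda API rejects it:

```typescript
import * as lambda from 'aws-cdk-lib/aws-lambda';

// Sketch: define the adapter layer without compatibleArchitectures.
// The property is informational metadata and is rejected by the Lambda
// API in some regions (me-central-1, eu-central-2, eu-central-1), so
// omitting it keeps the template deployable everywhere.
const adapterLayer = new lambda.LayerVersion(this, 'AdapterLayerX86', {
  code: lambda.Code.fromAsset('path/to/layer'),
  compatibleRuntimes: [lambda.Runtime.NODEJS_18_X],
  // compatibleArchitectures intentionally omitted
});
```

If the metadata is worth keeping elsewhere, a CloudFormation condition keyed on the region could set it only where supported.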


This is 🐛 Bug Report

QuickSight disabled in China regions

Summary

The Reporting step errors in China regions.

Steps to reproduce

  1. Deploy the solution in a China region.
  2. Create a pipeline.
  3. Go to step 4, Reporting.

What is the current bug behavior?

Internal server error

What is the expected correct behavior?

The Reporting step should be disabled in China regions instead of returning an error.

Relevant logs and/or screenshots


Possible fixes


This is 🐛 Bug Report

Additional Settings typo

Summary

"Addtional Settings" should be "Additional Settings" in the Data ingestion module.

Steps to reproduce

What is the current bug behavior?

The title text has a typo.

What is the expected correct behavior?

Relevant logs and/or screenshots


Possible fixes


This is 🐛 Bug Report

Put CORS configuration outside of Additional settings

Describe the feature

Put CORS configuration outside of Additional settings

Use Case

The CORS configuration is easily overlooked when hidden among Additional settings, which causes the web SDK endpoints to stop working properly.

Proposed Solution - Optional

Put CORS configuration outside of Additional settings

Other Information

When placed outside, it is disabled by default; the user can click the toggle button to turn it on, and the configuration text area then appears.

Notes:

  • this feature has a UI update
  • this feature contains an implementation guide update

This is a 🚀 Feature Request
