turbot / powerpipe

Powerpipe: Dashboards for DevOps. Visualize cloud configurations. Assess security posture against a massive library of benchmarks. Build custom dashboards with code.

Home Page: https://powerpipe.io/

License: GNU Affero General Public License v3.0

aws kubernetes security azure cis cloud devops devsecops gcp postgresql

powerpipe's Introduction


Powerpipe is dashboards and benchmarks as code. Use it to visualize any data source, and run compliance benchmarks and controls, for effective decision-making and ongoing compliance monitoring.

Benchmarks - 5,000+ open-source controls from CIS, NIST, PCI, HIPAA, FedRAMP and more. Run instantly on your machine or as part of your deployment pipeline.

Relationship Diagrams - The only dashboarding tool designed from the ground up to visualize DevOps data. Explore your cloud, understand relationships, drill down to the details.

Dashboards & Reports - High-level dashboards provide a quick overview. Use them to highlight misconfigurations and hotspots. Filter, pivot, and snapshot results.

Code, not clicks - Our dashboards are code: version-controlled, composable, shareable, easy to edit — designed for the way you work. Join our open-source community!

Demo time!

Watch on YouTube →


Install Powerpipe

The downloads page shows you how, but tl;dr:

Linux or WSL

sudo /bin/sh -c "$(curl -fsSL https://powerpipe.io/install/powerpipe.sh)"

macOS

brew tap turbot/tap
brew install powerpipe

Dashboards for DevOps

See our documentation for examples of how to use Powerpipe to visualize cloud infrastructure and run security and compliance benchmarks. These examples use mods written for Steampipe and its plugin ecosystem.

Note, though, that Powerpipe is database-agnostic. We also provide samples for dashboards that use other data sources via Postgres, SQLite, DuckDB, and MySQL.
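For example, the same server command can point at different backends just by changing the connection string passed to --database (a sketch; hosts, credentials, and file paths below are placeholders):

powerpipe server --database postgres://user:pass@localhost:5432/dbname
powerpipe server --database sqlite:./my_mod/my.db
powerpipe server --database duckdb:./my_mod/ducks.db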

Open source & contributing

This repository is published under the AGPL 3.0 license. Please see our code of conduct. Contributors must sign our Contributor License Agreement as part of their first pull request. We look forward to collaborating with you!

Powerpipe is a product produced from this open source software, exclusively by Turbot HQ, Inc. It is distributed under our commercial terms. Others are allowed to make their own distribution of the software, but cannot use any of the Turbot trademarks, cloud services, etc. You can learn more in our Open Source FAQ.

Get involved

Join #powerpipe on Slack →

Want to help but don't know where to start? Pick up one of the help wanted issues.

powerpipe's People

Contributors

binaek, bob-bot, chandru89new, dependabot[bot], e-gineer, judell, kaidaguerre, michaelburgess, omerosaienni, pskrbasu, sbldevnet, vhadianto


powerpipe's Issues

Final Release Checklist

  • Private --> Public repo
  • Update installer to remove API code (after repo goes public)
  • Update installer to use sh shebang instead of bash (after repo goes public)
  • Update Dockerfile to remove the GITHUB_TOKEN parts (after repo goes public)
  • Release workflow including homebrew/github registry publishing
  • Add the powerpipe specific script in homebrew-tap for versioning
  • Remove existing pre-releases
  • Should we pack the README and CHANGELOG in the released tar.gz like Flowpipe? Or just the binary like Steampipe?
  • Update pipe fittings with powerpipe specific stuff:
  • Merge local pp-workspace-sample pipe-fittings branch to get the sample workspaces file
  • Make the corresponding powerpipe changes after merging the pipe-fittings branch

Add ability to have more detailed or customizable x-axis on time series charts

Is your feature request related to a problem? Please describe.
When a chart's x-axis uses time series data, the axis values are formatted automatically, which is helpful; however, usually only the years are visible, and users have to hover over each line or point to find the specific month.

Describe the solution you'd like
Add the ability to have each month display on the x-axis
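A sketch of what this might look like in a dashboard chart definition; the format property is hypothetical and does not exist today:

    chart {
      axes {
        x {
          labels {
            display = "always"
            # hypothetical property: force month-granularity tick labels
            format = "MMM YYYY"
          }
        }
      }
    }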

Describe alternatives you've considered
Creating the x-axis values manually


Points to discuss

  • Should we pack the README and CHANGELOG in the released tar.gz like Flowpipe? Or just the binary like Steampipe?

present common dimensions in a standard order

When I add connection_name as a common dimension, I can get different orderings of common dimensions across controls. Ideally the ordering would always be the same: the order in which I named them in the spvars file.


Setting a variable by using the mod's namespace doesn't set the variable

Describe the bug
In the AWS Well-Architected mod, if I set a variable like aws_well_architected.common_dimensions = ["account_id"], this does not actually set the variable. However, if I set it like common_dimensions = ["account_id"], it does.
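A minimal reproduction sketch, assuming the values are passed on the command line with --var:

    # silently ignored (the bug), despite using the mod's namespace:
    steampipe check all --var 'aws_well_architected.common_dimensions=["account_id"]'
    # takes effect:
    steampipe check all --var 'common_dimensions=["account_id"]'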

Steampipe version (steampipe -v)
v0.19.4

To reproduce
See above

Expected behavior
The variable should be set


Unable to connect to a MySQL database

Describe the bug

 powerpipe-mod-classic-car-models git:(classic_cars) ✗ powerpipe server --database mysql://root:admin@123@localhost:3306/classicmodels 

Error: unable to connect to backend: could not connect to duckdb backend: default addr for network 'localhost:3306' unknown
➜  powerpipe-mod-classic-car-models git:(classic_cars) ✗ powerpipe -v
Powerpipe v0.1.0-alpha.202401231827
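One likely factor, though not confirmed in the report: the password admin@123 contains an unescaped @, so the connection URL is ambiguous to parse. Percent-encoding the @ in the password as %40 is worth trying:

powerpipe server --database "mysql://root:admin%40123@localhost:3306/classicmodels"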


Intermittent SQL deadlock error when running benchmark

This sometimes occurs when running steampipe check benchmark.cis_v150 with no service running.

It seems to be a deadlock between the refresh connection query and the benchmark query.

Log:

2023-07-03 14:23:32.002 UTC [INFO]  hub: StartScan for table: aws_s3_bucket, cache enabled: false, iterator 0xc000163980, 0 quals (1688394211353)
2023-07-03 14:23:32.216 UTC [INFO]  hub: StartScan for table: aws_iam_policy, cache enabled: false, iterator 0xc000154fc0, 1 quals (1688394211576)
2023-07-03 14:23:32.295 UTC [35831] ERROR:  deadlock detected at character 268
2023-07-03 14:23:32.295 UTC [35831] DETAIL:  Process 35831 waits for AccessShareLock on relation 48689 of database 16384; blocked by process 35827.
	Process 35827 waits for AccessExclusiveLock on relation 48923 of database 16384; blocked by process 35831.
	Process 35831: select
	  -- Required Columns
	  u.arn as resource,
	  case
	    when count(k.*) > 1 then 'alarm'
	    else 'ok'
	  end as status,
	  u.name || ' has ' || count(k.*) || ' active access key(s).' as reason
	  -- Additional Dimensions
	  
	  
	from
	  aws_iam_user as u
	  left join aws_iam_access_key as k on u.name = k.user_name and u.account_id = k.account_id
	where
	  k.status = 'Active' or k.status is null
	group by
	  u.arn,
	  u.name,
	  u.account_id,
	  u.tags,
	  u._ctx;
	
	Process 35827: drop schema if exists "aws_001" cascade;
	create schema "aws_001";
	comment on schema "aws_001" is 'steampipe plugin: hub.steampipe.io/plugins/turbot/aws@latest';
	grant usage on schema "aws_001" to steampipe_users;
	alter default privileges in schema "aws_001" grant select on tables to steampipe_users;
	grant select on all tables in schema "aws_001" to steampipe_users;
	import foreign schema "hub.steampipe.io/plugins/turbot/aws@latest" from server steampipe into "aws_001";
	
2023-07-03 14:23:32.295 UTC [35831] HINT:  See server log for query details.
2023-07-03 14:23:32.295 UTC [35831] STATEMENT:  select
	  -- Required Columns
	  u.arn as resource,
	  case
	    when count(k.*) > 1 then 'alarm'
	    else 'ok'
	  end as status,
	  u.name || ' has ' || count(k.*) || ' active access key(s).' as reason
	  -- Additional Dimensions
	  
	  
	from
	  aws_iam_user as u
	  left join aws_iam_access_key as k on u.name = k.user_name and u.account_id = k.account_id
	where
	  k.status = 'Active' or k.status is null
	group by
	  u.arn,
	  u.name,
	  u.account_id,
	  u.tags,
	  u._ctx;
	
2023-07-03 14:23:32.415 UTC [INFO]  hub: goFdwBeginForeignScan, connection 'aws_001', table 'aws_s3_bucket', explain: false
2023-07-03 14:23:32.416 UTC [INFO]  hub: --------
2023-07-03 14:23:32.416 UTC [INFO]  hub: no quals
2023-07-03 14:23:32.416 UTC [INFO]  hub: --------
2023-07-03 14:23:32.416 UTC [INFO]  hub: goFdwBeginForeignScan, connection 'aws_001', table 'aws_s3_account_settings', explain: false
2023-07-03 14:23:32.417 UTC [INFO]  hub: --------
2023-07-03 14:23:32.417 UTC [INFO]  hub: no quals
2023-07-03 14:23:32.417 UTC [INFO]  hub: --------
2023-07-03 14:23:32.417 UTC [INFO]  hub: StartScan for table: aws_s3_bucket, cache enabled: false, iterator 0xc000b10fc0, 0 quals (1688394212941)
2023-07-03 14:23:34.552 UTC [INFO]  hub: StartScan for table: aws_s3_bucket, cache enabled: false, iterator 0xc0001592c0, 0 quals (168839421194)
2023-07-03 14:23:34.554 UTC [INFO]  hub: StartScan for table: aws_macie2_classification_job, cache enabled: false, iterator 0xc000163c80, 0 quals (1688394212574)
2023-07-03 14:23:35.379 UTC [INFO]  hub: StartScan for table: aws_s3_account_settings, cache enabled: false, iterator 0xc000b112c0, 0 quals (1688394212735)
2023-07-03 14:23:41.368 UTC [35831] LOG:  duration: 8951.437 ms  execute stmtcache_41: select
	  -- Required Columns
	  arn as resource,
	  case
	    when (bucket.block_public_acls or s3account.block_public_acls)
	      and (bucket.block_public_policy or s3account.block_public_policy)
	      and (bucket.ignore_public_acls or s3account.ignore_public_acls)
	      and (bucket.restrict_public_buckets or s3account.restrict_public_buckets)
	      then 'ok'
	    else 'alarm'
	  end as status,
	  case
	    when (bucket.block_public_acls or s3account.block_public_acls)
	      and (bucket.block_public_policy or s3account.block_public_policy)
	      and (bucket.ignore_public_acls or s3account.ignore_public_acls)
	      and (bucket.restrict_public_buckets or s3account.restrict_public_buckets)
	      then name || ' all public access blocks enabled.'
	    else name || ' not enabled for: ' ||
	      concat_ws(', ',
	        case when not (bucket.block_public_acls or s3account.block_public_acls) then 'block_public_acls' end,
	        case when not (bucket.block_public_policy or s3account.block_public_policy) then 'block_public_policy' end,
	        case when not (bucket.ignore_public_acls or s3account.ignore_public_acls) then 'ignore_public_acls' end,
	        case when not (bucket.restrict_public_buckets or s3account.restrict_public_buckets) then 'restrict_public_buckets' end
	      ) || '.'
	  end as reason
	  -- Additional Dimensions
	  
	  
	from
	  aws_s3_bucket as bucket,
	  aws_s3_account_settings as s3account
	where
	  s3account.account_id = bucket.account_id;
	
	
2023-07-03 14:23:41.628 UTC [35828] LOG:  duration: 9760.837 ms  execute stmtcache_39: select
	  -- Required Columns
	  arn as resource,
	  case
	    when versioning_mfa_delete then 'ok'
	    else 'alarm'
	  end status,
	  case
	    when versioning_mfa_delete then name || ' MFA delete enabled.'
	    else name || ' MFA delete disabled.'
	  end reason
	  -- Additional Dimensions
	  
	  
	from
	  aws_s3_bucket;
	
2023-07-03 14:23:41.777 UTC [35830] LOG:  duration: 9988.768 ms  execute stmtcache_37: select
	  -- Required Columns
	  arn as resource,
	  case
	    when server_side_encryption_configuration is not null then 'ok'
	    else 'alarm'
	  end status,
	  case
	    when server_side_encryption_configuration is not null then name || ' default encryption enabled.'
	    else name || ' default encryption disabled.'
	  end reason
	  -- Additional Dimensions
	  
	  
	from
	  aws_s3_bucket;
	
2023-07-03 14:23:41.780 UTC [35829] LOG:  duration: 9946.066 ms  execute stmtcache_38: with ssl_ok as (
	  select
	    distinct name,
	    arn,
	    'ok' as status
	  from
	    aws_s3_bucket,
	    jsonb_array_elements(policy_std -> 'Statement') as s,
	    jsonb_array_elements_text(s -> 'Principal' -> 'AWS') as p,
	    jsonb_array_elements_text(s -> 'Action') as a,
	    jsonb_array_elements_text(s -> 'Resource') as r,
	    jsonb_array_elements_text(
	      s -> 'Condition' -> 'Bool' -> 'aws:securetransport'
	    ) as ssl
	  where
	    p = '*'
	    and s ->> 'Effect' = 'Deny'
	    and ssl :: bool = false
	)
	select
	  -- Required Columns
	  b.arn as resource,
	  case
	    when ok.status = 'ok' then 'ok'
	    else 'alarm'
	  end status,
	  case
	    when ok.status = 'ok' then b.name || ' bucket policy enforces HTTPS.'
	    else b.name || ' bucket policy does not enforce HTTPS.'
	  end reason
	  -- Additional Dimensions
	  
	  
	from
	  aws_s3_bucket as b
	  left join ssl_ok as ok on ok.name = b.name;
	
2023-07-03 14:23:41.789 UTC [INFO]  hub: goFdwBeginForeignScan, connection 'aws_001', table 'aws_cloudtrail_trail', explain: false
2023-07-03 14:23:41.791 UTC [WARN]  hub: RestrictionsToQuals: failed to convert 1 restriction to quals
2023-07-03 14:23:41.792 UTC [INFO]  hub: --------
2023-07-03 14:23:41.792 UTC [INFO]  hub: no quals
2023-07-03 14:23:41.792 UTC [INFO]  hub: --------
2023-07-03 14:23:41.792 UTC [INFO]  hub: StartScan for table: aws_cloudtrail_trail, cache enabled: false, iterator 0xc0009bcc00, 0 quals (1688394221798)
2023-07-03 14:23:41.797 UTC [35832] LOG:  duration: 9794.955 ms  execute stmtcache_40: with bucket_list as (
	  select
	    trim(b::text, '"' ) as bucket_name
	  from
	    aws_macie2_classification_job,
	    jsonb_array_elements(s3_job_definition -> 'BucketDefinitions') as d,
	    jsonb_array_elements(d -> 'Buckets') as b
	)
	select
	  -- Required Columns
	  b.arn as resource,
	  case
	    when b.region = any(array['us-gov-east-1', 'us-gov-west-1']) then 'skip'
	    when l.bucket_name is not null then 'ok'
	    else 'alarm'
	  end status,
	  case
	    when b.region = any(array['us-gov-east-1', 'us-gov-west-1']) then b.title || ' not protected by Macie as Macie is not supported in ' || b.region || '.'
	    when l.bucket_name is not null then b.title || ' protected by Macie.'
	    else b.title || ' not protected by Macie.'
	  end reason
	  -- Additional Dimensions
	  
	  
	from
	  aws_s3_bucket as b
	  left join bucket_list as l on b.name = l.bucket_name;
	
2023-07-03 14:23:42.788 UTC [35830] ERROR:  deadlock detected at character 1035
2023-07-03 14:23:42.788 UTC [35830] DETAIL:  Process 35830 waits for AccessShareLock on relation 49016 of database 16384; blocked by process 35827.
	Process 35827 waits for AccessExclusiveLock on relation 49901 of database 16384; blocked by process 35830.
	Process 35830: with event_selectors_trail_details as (
	  select
	    distinct account_id
	  from
	    aws_cloudtrail_trail,
	    jsonb_array_elements(event_selectors) as e
	  where
	    (is_logging and is_multi_region_trail and e ->> 'ReadWriteType' = 'All')
	),
	advanced_event_selectors_trail_details as (
	  select
	    distinct account_id
	  from
	    aws_cloudtrail_trail,
	    jsonb_array_elements_text(advanced_event_selectors) as a
	  where
	  -- when readOnly = true, then it is readOnly, when readOnly = false then it is writeOnly, if advanced_event_selectors is not null then it is both ReadWriteType
	    (is_logging and is_multi_region_trail and advanced_event_selectors is not null and (not a like '%readOnly%'))
	)
	select
	  -- Required Columns
	  a.title as resource,
	  case
	    when d.account_id is null and ad.account_id is null then 'alarm'
	    else 'ok'
	  end as status,
	    case
	    when d.account_id is null and ad.account_id is null then 'cloudtrail disabled.'
	    else 'cloudtrail enabled.'
	  end as reason
	  -- Additional Dimensions
	Process 35827: drop schema if exists "aws_001" cascade;
	create schema "aws_001";
	comment on schema "aws_001" is 'steampipe plugin: hub.steampipe.io/plugins/turbot/aws@latest';
	grant usage on schema "aws_001" to steampipe_users;
	alter default privileges in schema "aws_001" grant select on tables to steampipe_users;
	grant select on all tables in schema "aws_001" to steampipe_users;
	import foreign schema "hub.steampipe.io/plugins/turbot/aws@latest" from server steampipe into "aws_001";
	
2023-07-03 14:23:42.788 UTC [35830] HINT:  See server log for query details.
2023-07-03 14:23:42.788 UTC [35830] STATEMENT:  with event_selectors_trail_details as (
	  select
	    distinct account_id
	  from
	    aws_cloudtrail_trail,
	    jsonb_array_elements(event_selectors) as e
	  where
	    (is_logging and is_multi_region_trail and e ->> 'ReadWriteType' = 'All')
	),
	advanced_event_selectors_trail_details as (
	  select
	    distinct account_id
	  from
	    aws_cloudtrail_trail,
	    jsonb_array_elements_text(advanced_event_selectors) as a
	  where
	  -- when readOnly = true, then it is readOnly, when readOnly = false then it is writeOnly, if advanced_event_selectors is not null then it is both ReadWriteType
	    (is_logging and is_multi_region_trail and advanced_event_selectors is not null and (not a like '%readOnly%'))
	)
	select
	  -- Required Columns
	  a.title as resource,
	  case
	    when d.account_id is null and ad.account_id is null then 'alarm'
	    else 'ok'
	  end as status,
	    case
	    when d.account_id is null and ad.account_id is null then 'cloudtrail disabled.'
	    else 'cloudtrail enabled.'
	  end as reason
	  -- Additional Dimensions
	  
	from
	  aws_account as a
	  left join event_selectors_trail_details as d on d.account_id = a.account_id
	  left join advanced_event_selectors_trail_details as ad on ad.account_id = a.account_id;
	

Improve the HCL parsing errors to include line numbers.

Is your feature request related to a problem? Please describe.
In some places, the error message returned by the HCL parsing doesn't provide line numbers.

    Dashboard Error
    Failed to decode all mod hcl files
    failed to parse dependency: invalid property path 'none' passed to ParseResourcePropertyPath

This particular error was generated by this code:

      column "id" {
        display = none
      }

The fix was to put double quotes around none.

      column "id" {
        display = "none"
      }

In my case, I had two column blocks that had the same problem with the missing double quotes.

Describe the solution you'd like
Include which line the parsing error is happening on.

Other information
Just prior to getting the above error, I'd put some card blocks inside a query block where they do not belong. The HCL parse error message identified which line number the offending card blocks were on. I was able to resolve the problem in less than a minute.

Add ability to install a specific mod branch/commit with `steampipe mod install`

Is your feature request related to a problem? Please describe.
If I have changes on a branch of a mod repository that aren't in a tag yet, I'd like to be able to install that branch with steampipe mod install.

Describe the solution you'd like
Be able to pass a branch name or commit ID to steampipe mod install
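A sketch of what that could look like; both forms are hypothetical syntax, not current behavior:

    # hypothetical: install from a branch
    steampipe mod install github.com/turbot/steampipe-mod-aws-compliance#my-branch
    # hypothetical: pin to a specific commit
    steampipe mod install github.com/turbot/steampipe-mod-aws-compliance#1a2b3c4d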

Describe alternatives you've considered
Create a temporary tag to pull down


View benchmark tags in export results

Is your feature request related to a problem? Please describe.
If I have a mod whose benchmarks use controls from a mod dependency, when I run steampipe check ... --export=csv, only the control tags are in the results, not the benchmark tags.

I can use tags = merge(...) while in the mod, but since the controls are defined in a mod dependency, I can't get my benchmark tags into the control's tags.
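For context, the merge() pattern that works when the controls live in your own mod looks like this (a sketch; names are assumed):

    benchmark "my_benchmark" {
      title    = "My Benchmark"
      # merge() combines the shared mod tags with benchmark-specific ones
      tags     = merge(local.my_mod_common_tags, { service = "AWS/S3" })
      children = [
        control.my_control
      ]
    }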

Describe the solution you'd like
Have a way to view benchmark tags in export results (if a benchmark and control have conflicting tag keys, the control's should win).


Dashboard grouping incorrect if grouping config contains any entries before benchmark

All results are incorrectly lumped under one of the benchmarks if that benchmark features in more than one of the parent groups. For example, when grouping by status, if ok and info are the result statuses and each status group contains the same benchmark, all the control results are lumped under one of those groups rather than spread out according to their individual control results.

Add support for passing in a `database` (formerly workspace_database), `search_path`, and `search_path_prefix` to child mods in the mod definition.

We want to add support for passing in a database (formerly workspace_database), search_path, and search_path_prefix to child mods in the mod definition. By default, they inherit the active database but can be overridden in the require block:

variable "duckdb_connection_string" {
  default = "duckdb:/home/ducks/mallard.db"
}

variable "flowpipe_connection_string" {
  default = "sqlite:/./my_mod/flowpipe.db"
}

mod "local" {
  require {
    mod "github.com/turbot/my_duckdb_mod {
      version     = ">=0.66.0"
      database    = var.duckdb_connection_string
      args = {
        foo = "bar"
      }
    }

    mod "github.com/turbot/flowpipe {
      version   = ">=0.66.0"
      database  = var.flowpipe_connection_string,
    }

    mod "github.com/turbot/aws_compliance {
      version            = ">=0.66.0"
      search_path_prefix = "aws_01"

    }
  }
}

When installing or updating transitive dependencies without updating or installing their parents, the tree display of installed mods is not rendered correctly

All mods:

Kais-MacBook-Pro:custom_mod3 kai$ sp mod install

Installed 3 mods:

local
└── github.com/kaidaguerre/[email protected]
    └── github.com/kaidaguerre/[email protected]
        └── github.com/kaidaguerre/[email protected]

Just steampipe-mod-m3:

Kais-MacBook-Pro:custom_mod3 kai$ sp mod install

Installed 1 mod:

local


Kais-MacBook-Pro:custom_mod3 kai$ 

steampipe dashboard crashes on to_timestamp when value is millis not seconds

This value comes from JavaScript.

    table {
      sql = <<EOQ
        select to_timestamp(1670377316386)
      EOQ
    }

The crash:

 Dashboard execution started: fedwiki.dashboard.fedwiki
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x16b28c5]

goroutine 155 [running]:
github.com/turbot/steampipe/pkg/dashboard/dashboardserver.buildLeafNodeUpdatedPayload(0x0)
        /home/runner/work/steampipe/steampipe/pkg/dashboard/dashboardserver/payload.go:187 +0xa5
github.com/turbot/steampipe/pkg/dashboard/dashboardserver.(*Server).HandleDashboardEvent(0xc0008800f0, {0x22935e8, 0xc0004520c0}, {0x227f600?, 0x0?})
        /home/runner/work/steampipe/steampipe/pkg/dashboard/dashboardserver/server.go:149 +0x5b1
github.com/turbot/steampipe/pkg/workspace.(*Workspace).handleDashboardEvent(0xc0004e6000, {0x22935e8, 0xc0004520c0})
        /home/runner/work/steampipe/steampipe/pkg/workspace/workspace_events.go:71 +0x104
created by github.com/turbot/steampipe/pkg/workspace.(*Workspace).RegisterDashboardEventHandler
        /home/runner/work/steampipe/steampipe/pkg/workspace/workspace_events.go:47 +0xed

The fix:

    table {
      sql = <<EOQ
        select to_timestamp(1670377316386 / 1000)
      EOQ
    }

rename workspace_database -> database

We want to rename workspace_database -> database:

  • workspace arg: workspace_database -> database
  • CLI arg: --workspace-database -> --database
  • env var: POWERPIPE_WORKSPACE_DATABASE -> POWERPIPE_DATABASE
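After the rename, usage would look like this (a sketch, assuming the change ships as described; the connection string is a placeholder):

    # CLI arg
    powerpipe server --database postgres://steampipe@localhost:9193/steampipe
    # env var
    export POWERPIPE_DATABASE=postgres://steampipe@localhost:9193/steampipe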

Introspection tables for other database systems

In steampipe, we depend on the temporary schema available on all database connections to create and populate the tables for mod introspection data.

Since the target database in powerpipe is abstracted away, we need to figure out a way to create and populate the introspection tables depending on the target database that powerpipe is connected to.
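To illustrate why a single approach won't work everywhere, temporary-table DDL and semantics differ across the candidate backends; a sketch, with the table name and columns assumed:

    -- Postgres: temporary tables land in a per-session temporary schema
    CREATE TEMPORARY TABLE steampipe_query (resource_name text, title text);
    -- SQLite: temporary tables live in the built-in "temp" database
    CREATE TEMP TABLE steampipe_query (resource_name text, title text);
    -- MySQL: temporary tables are session-scoped but must be created inside an existing database
    CREATE TEMPORARY TABLE steampipe_query (resource_name text, title text);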

Labels are truncated on the x-axis even when using the `display = "always"` property

Describe the bug
On charts, even though I have axes set with:

      axes {
        x {
          labels {
            display = "always"
          }
        }
      }

in graphs, sometimes I still see items on the x-axis not showing up:

[screenshots: charts with truncated x-axis labels]


To reproduce
See above

Expected behavior
All labels should be shown

