google / cloud-forensics-utils
Python library to carry out DFIR analysis on the Cloud
License: Apache License 2.0
The idea is to pass a list of strings to StartAnalysisVm() for the attach_volume / attach_disk parameter instead of an AWSVolume or a GoogleComputeDisk object. The list of strings would ideally be a list of volume IDs/names to attach. This would make StartAnalysisVm() more flexible (e.g. when using these methods in the CLI tool without having to define volumes/disks first).
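A minimal sketch of how the parameter could accept both forms; AWSVolume here is a stand-in class and NormalizeVolumes a hypothetical helper, not the library's real API:

```python
from typing import List, Union


class AWSVolume:
  """Stand-in for the real AWSVolume object (illustrative only)."""

  def __init__(self, volume_id: str) -> None:
    self.volume_id = volume_id


def NormalizeVolumes(volumes: List[Union[str, AWSVolume]]) -> List[str]:
  """Turn a mixed list of volume IDs and volume objects into plain IDs.

  One way StartAnalysisVm() could accept a list of strings while staying
  backwards compatible with callers that pass volume objects.
  """
  volume_ids = []
  for volume in volumes:
    # Strings are treated as volume IDs/names; objects expose volume_id.
    volume_ids.append(volume if isinstance(volume, str) else volume.volume_id)
  return volume_ids
```

The CLI tool could then forward --volume_id arguments straight through without constructing volume objects first.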
Explore possibilities of retrieving system logs (e.g. /var/log/syslog) and other common logs directly from libcloudforensics
$ python -m examples.aws_cli copyvolume --volume_id=VOLUMEID --src_account=default --dst_account=blah us-east-2a
Starting volume copy...
Traceback (most recent call last):
  File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/google/home/mbrannock/virtualenv/cloud-forensics-utils/examples/aws_cli.py", line 106, in <module>
    parsed_args.func(parsed_args)
  File "/usr/local/google/home/mbrannock/virtualenv/cloud-forensics-utils/examples/aws_cli.py", line 34, in CreateVolumeCopy
    src_account=args.src_account, dst_account=args.dst_account)
  File "/usr/local/google/home/mbrannock/virtualenv/cloud-forensics-utils/libcloudforensics/aws.py", line 1272, in CreateVolumeCopy
    snapshot, volume_name_prefix='evidence')
  File "/usr/local/google/home/mbrannock/virtualenv/cloud-forensics-utils/libcloudforensics/aws.py", line 427, in CreateVolumeFromSnapshot
    snapshot, volume_name_prefix=volume_name_prefix)
  File "/usr/local/google/home/mbrannock/virtualenv/cloud-forensics-utils/libcloudforensics/aws.py", line 670, in _GenerateVolumeName
    volume_name_prefix, snapshot.name[:truncate_at], volume_id_crc32)
TypeError: 'NoneType' object is not subscriptable
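The TypeError suggests snapshot.name is None, e.g. when the source volume has no Name tag. A minimal sketch of a guard, simplified from _GenerateVolumeName (which takes a snapshot object rather than plain strings):

```python
import binascii


def GenerateVolumeName(snapshot_name, volume_id, volume_name_prefix=''):
  """Sketch of a None-safe volume name builder.

  If snapshot_name is None (no Name tag on the source volume),
  snapshot_name[:truncate_at] would raise the TypeError seen above, so
  fall back to an empty string first.
  """
  truncate_at = 40
  snapshot_name = snapshot_name or ''  # guard against a missing Name tag
  volume_id_crc32 = '{0:08x}'.format(
      binascii.crc32(volume_id.encode()) & 0xffffffff)
  parts = [volume_name_prefix, snapshot_name[:truncate_at], volume_id_crc32]
  return '-'.join(part for part in parts if part)
```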
Make sure try/except blocks only contain code useful in the context of the boto3 API calls, and catch both client / waiter exceptions.
Similar to log2timeline/dftimewolf#176
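A sketch of the pattern: botocore's ClientError / WaiterError are the real exception classes (stand-ins are defined so the snippet runs without boto3 installed), volume_available is a real EC2 waiter name, and WaitForVolume itself is hypothetical:

```python
try:
  from botocore.exceptions import ClientError, WaiterError
except ImportError:
  # Stand-ins so the sketch runs without boto3/botocore installed.
  class ClientError(Exception):
    pass

  class WaiterError(Exception):
    pass


def WaitForVolume(ec2_client, volume_id):
  """Keep only the API calls in the try block; catch both exception families."""
  try:
    waiter = ec2_client.get_waiter('volume_available')
    waiter.wait(VolumeIds=[volume_id])
  except (ClientError, WaiterError) as exception:
    raise RuntimeError('Volume {0:s} is not available: {1!s}'.format(
        volume_id, exception))
```

Keeping non-API code (name generation, logging, bookkeeping) outside the try block avoids masking unrelated bugs as API failures.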
Change method names to CamelCase
Add pylintrc to enforce style
Hijack the plaso-ci platform, maybe? And have a script to test the "make a copy of a live VM and attach it to a new forensics instance" workflow.
libcloudforensics should be able to read/write S3 and GCS buckets.
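One way to expose this is a common entry point that dispatches on the path scheme; SplitStoragePath and ReadObject are hypothetical names, while S3.Client.get_object (boto3) and Blob.download_as_bytes (google-cloud-storage) are the real underlying calls:

```python
from urllib.parse import urlparse


def SplitStoragePath(path):
  """Split an s3:// or gs:// path into (scheme, bucket, object_key)."""
  parsed = urlparse(path)
  if parsed.scheme not in ('s3', 'gs'):
    raise ValueError('Unsupported storage path: {0:s}'.format(path))
  return parsed.scheme, parsed.netloc, parsed.path.lstrip('/')


def ReadObject(path):
  """Read the object's bytes; requires credentials for the relevant cloud."""
  scheme, bucket, key = SplitStoragePath(path)
  if scheme == 's3':
    import boto3  # real API: S3.Client.get_object
    return boto3.client('s3').get_object(Bucket=bucket, Key=key)['Body'].read()
  from google.cloud import storage  # real API: Blob.download_as_bytes
  return storage.Client().bucket(bucket).blob(key).download_as_bytes()
```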
Be able to pull cloud logs: have the ability to pass log queries in raw form, to be able to create very specific recipes.
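One way to support this is to append an opaque raw clause to whatever structured filters the library builds before handing the result to the logging API. BuildLogFilter is a hypothetical name; the syntax is Cloud Logging's filter language:

```python
def BuildLogFilter(resource_type=None, severity=None, raw=None):
  """Combine structured filter arguments with a raw, pass-through clause."""
  clauses = []
  if resource_type:
    clauses.append('resource.type="{0:s}"'.format(resource_type))
  if severity:
    clauses.append('severity>={0:s}'.format(severity))
  if raw:
    # The raw clause is forwarded untouched so recipes can be very specific.
    clauses.append('({0:s})'.format(raw))
  return ' AND '.join(clauses)
```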
YAPF should put every function parameter on a separate line for readability.
Add the following to the YAPF style file and create a separate re-style PR for all code.
SPLIT_ALL_COMMA_SEPARATED_VALUES = True
See this PR for more details: https://github.com/log2timeline/dftimewolf/pull/257/files
See #28 (comment)
Deduplicate the "while not have_all_tokens:" kind of loops found across gcp.py and aws.py.
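A sketch of what a shared helper could look like (PaginateResults is a hypothetical name); each API-specific function then only has to return one page and the next token:

```python
def PaginateResults(request_page):
  """Generator replacing the per-module `while not have_all_tokens` loops.

  request_page is any callable taking a page token (None for the first
  page) and returning (items, next_page_token), where next_page_token is
  falsy on the last page.
  """
  page_token = None
  while True:
    items, page_token = request_page(page_token)
    for item in items:
      yield item
    if not page_token:
      return
```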
With the new capabilities brought by #70 and #119, and seeing how granular AWS IAM permissions can get when it comes to identifying the minimum set of required permissions to use the CreateVolumeCopy() functionality in different scenarios, we need more e2e tests that verify these scenarios:
It'd be nice to be able to attach several disks in one go when creating the analysis VM, i.e. attach_disk should take a list of GoogleComputeDisk instead of a single item.
There's a bug in GoogleCloudProject.create_disk_from_snapshot() with the disk's name creation. If the project name and the instance name are long strings, then truncate_at becomes negative and the resulting disk_name may end up being longer than 63 chars. Additionally, len(project_id) should not be subtracted from truncate_at, since the project id itself is not used in the disk name.
The disk name creation code should be factored out of this method and unit tested separately.
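A sketch of a factored-out, separately testable name builder. The 63-char limit is GCE's resource-name limit; the prefix default and "-copy" suffix are assumptions:

```python
import binascii

MAX_DISK_NAME_LENGTH = 63  # GCE resource name limit


def GenerateDiskName(snapshot_name, disk_name_prefix='evidence'):
  """Build a disk name guaranteed to fit within the GCE limit.

  Clamps truncate_at at zero instead of letting it go negative, and does
  not subtract len(project_id), since the project id is not part of the
  final disk name.
  """
  suffix = '{0:08x}-copy'.format(
      binascii.crc32(snapshot_name.encode()) & 0xffffffff)
  # Budget: prefix + '-' + truncated snapshot name + '-' + suffix.
  truncate_at = MAX_DISK_NAME_LENGTH - len(disk_name_prefix) - len(suffix) - 2
  truncate_at = max(truncate_at, 0)  # never negative
  parts = [disk_name_prefix, snapshot_name[:truncate_at], suffix]
  return '-'.join(part for part in parts if part)
```

As a free-standing function this can be unit tested against arbitrarily long snapshot names without touching the GCP API.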
Edit 2020-06-04 [giovannt0]: GCP already offers this in CreateDiskCopy() with the zone parameter. As such, removing the GCP label.
Refactor how we read install/test_requirements in setup.py and do not use internal pip structures due to pypa/pip#8188
As discussed, let's break down the compute.py module into more granular modules so that we are closer to the original API: compute.disk, compute.instance...
Sometimes the ec2 browser client fails to install, resulting in the machine being locked out from external connections.
Make sure docstrings are on par with https://github.com/log2timeline/l2tdocs/blob/master/process/Style-guide.md
When executing log queries it would be nice to have the option to have the output copied to a S3 or GCS storage bucket in a destination project.
Factor out the bash script inside gcp.py into its own file.
Maybe use Sphinx? https://www.sphinx-doc.org/en/master/
When creating a disk copy, the generated snapshot is not deleted in the source project.
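A sketch of the fix: wrap the copy in try/finally so the intermediate snapshot is always deleted, even when creating the new disk fails. The callables are injected here purely to keep the sketch self-contained; the real method calls the GCP API directly.

```python
def CreateDiskCopy(source_disk, make_snapshot, make_disk_from_snapshot,
                   delete_snapshot):
  """Copy a disk via a snapshot, always cleaning up the snapshot."""
  snapshot = make_snapshot(source_disk)
  try:
    return make_disk_from_snapshot(snapshot)
  finally:
    # Runs on success and on failure, so no snapshot is leaked in the
    # source project.
    delete_snapshot(snapshot)
```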