anitsh / til

Today I Learned (TIL) - GitHub `Issues` used as a daily learning management system for taking notes and storing resource links.

Home Page: https://anitshrestha.com.np

License: MIT License

learn learning learning-by-doing notes note-taking notepad

til's Introduction

Software design is an art, and like any art it cannot be taught and learned as a precise science, by means of theorems and formulas. We can discover principles and techniques useful throughout the process of software creation, but we probably won't ever be able to provide an exact path from a real-world need to the code of the module meant to serve that need. As needs change, systems have to adapt quickly, so we need to keep the design flexible: a stiff design resists refactoring, and code that was not built with flexibility in mind is hard to work with.

The most basic general technique for designing a versatile system is to make each and every component highly cohesive and loosely coupled.

Design components that are self-contained, independent, and with a single, well-defined purpose.

— The Pragmatic Programmer

The most important skill in programming is problem decomposition: how we view a problem and break it down into pieces so that we can understand the tradeoffs, both discretely and as a whole, and build the solutions relatively independently and easily, with maintainability as one of the primary objectives.

Ask me anything https://github.com/anitsh/ama/issues


til's Issues

GraphQL

Software Design Patterns

A design pattern is a reusable form of a solution to a design problem: a template or pattern describing a solution to a common problem. Reusing such patterns can help speed up the software development process.

In software engineering, a software design pattern is a general, reusable solution to a commonly occurring problem within a given context in software design. It is not a finished design that can be transformed directly into source or machine code. Rather, it is a description or template for how to solve a problem that can be used in many different situations. Design patterns are formalized best practices that the programmer can use to solve common problems when designing an application or system.

Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Patterns that imply mutable state may be unsuited for functional programming languages, some patterns can be rendered unnecessary in languages that have built-in support for solving the problem they are trying to solve, and object-oriented patterns are not necessarily suitable for non-object-oriented languages.

Design patterns may be viewed as a structured approach to computer programming intermediate between the levels of a programming paradigm and a concrete algorithm.

"Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice." -- ChristopherAlexander

"A design pattern systematically names, motivates, and explains a general design that addresses a recurring design problem in object-oriented systems. It describes the problem, the solution, when to apply the solution, and its consequences. It also gives implementation hints and examples. The solution is a general arrangement of objects and classes that solve the problem. The solution is customized and implemented to solve the problem in a particular context". - DesignPatternsBook

Given that patterns can be applied to many different disciplines, I would suggest that we talk about software design patterns, to differentiate them from architectural design patterns or other kinds. Then the question is: are there any design patterns that work across specific disciplines? I doubt it, although there may be some "meta" patterns...

Resource

StatsD - Collecting Application Metrics

StatsD

  • What is StatsD exactly?
  • How does StatsD work?
  • What sets StatsD apart from the rest?
  • What problem does StatsD solve?

StatsD makes collecting application metrics simpler for developers: you instrument your code with the specific metrics you want to observe.

StatsD is a network daemon released by Etsy and written in Node.js to collect, aggregate, and send developer-defined application metrics to a separate system for graphical analysis. Initially, the daemon’s job was to listen on a UDP port for incoming metrics data, parse and extract this information, and periodically send this data to Graphite in an aggregated format.

One big goal of StatsD is to collect data quickly. The better transport protocol for this is UDP. With UDP, the StatsD client can just send the metrics data and assume that it will get to the daemon, especially if it’s on the same instance.

The StatsD architecture consists of three main components: client, server, and backend.
The client implementation contains the libraries for the specific language you’re using for your application.
The server implementation includes a daemon that listens for UDP traffic coming from the client libraries. It then aggregates all their data and flushes everything to the backend system.
The backend component, which now includes more than Graphite, is where all of the metrics data will reside for graphing and analysis.

The basic metrics data that the StatsD client sends contains three things: a metric name, its value, and a metric type. This data is formatted this way:
<metric_name>:<metric_value>|<metric_type>
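Because the wire format is plain text over UDP, a metric can even be sent from a shell without any client library. A minimal sketch, assuming a StatsD daemon listening on the conventional default port 8125 on localhost (the metric names are made up):

    # Send a counter and a timer datagram using bash's built-in /dev/udp device
    echo -n "page.views:1|c" > /dev/udp/127.0.0.1/8125        # counter: increment page.views by 1
    echo -n "db.query_time:320|ms" > /dev/udp/127.0.0.1/8125  # timer: one 320 ms measurement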

Resources:

Sugarlabs

https://github.com/sugarlabs

Sugar Labs: a community for learning and software-development

The Sugar development platform is available under the free/libre software GNU General Public License (GPL) to anyone who wants to extend it. "Sugar Labs" is a member project of the Software Freedom Conservancy (a non-profit foundation that produces, distributes, and supports the use of free software) and serves as a support base and gathering place for the community of educators and software developers who want to extend the platform and who have been creating Sugar-compatible applications.

Python Basics

Install Linux on Sony Aqua Mobile


Objective:

The primary objective was to root the phone.
Current state: the OS has been erased.

Sony Phone Details

Name: Sony Xperia M4 Aqua
Codename: Tulip
Model: E2363
Operating system: Android 5.0 Lollipop (launch), Android 6.0.1 Marshmallow (current)
System on chip: Qualcomm Snapdragon 615
CPU: Octa-core (1.5 GHz quad-core Cortex-A53 & 1.0 GHz quad-core Cortex-A53)
GPU: Adreno 405
Memory: 2 GB RAM
Storage: 16 GB
Removable storage: Up to 200 GB microSDXC
https://en.wikipedia.org/wiki/Sony_Xperia_M4_Aqua

Concepts


Bootloader
A bootloader is a piece of code that runs before any operating system. Bootloaders are used to boot operating systems, and usually each operating system has a set of bootloaders specific to it. The bootloader is like the BIOS of your computer: it is the first thing that runs when you boot up your Android device, and it packages the instructions to boot the operating system kernel.
"A bootloader is a vendor-proprietary image responsible for bringing up the kernel on a device. It guards the device state and is responsible for initializing the Trusted Execution Environment (TEE) and binding its root of trust.
The bootloader comprises many things, including the splash screen. To start boot, the bootloader may directly flash a new image into an appropriate partition or optionally use recovery to start the reflashing process that will match how it is done for over-the-air (OTA). Some device manufacturers create multi-part bootloaders and then combine them into a single bootloader.img file. At flash time, the bootloader extracts the individual bootloaders and flashes them all." - Google

Fastboot
The fastboot protocol is a mechanism for communicating with bootloaders over USB or Ethernet. It is designed to be very straightforward to implement, so that it can be used across a wide range of devices and from hosts running Linux, macOS, or Windows.
https://android.googlesource.com/platform/system/core/+/master/fastboot
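As a hedged sketch of how these pieces fit together, a typical unlock-and-flash session from a Linux host looks roughly like the following; the unlock code and image file names are placeholders, and the exact partitions and unlock procedure vary per device (Sony provides unlock codes for Xperia devices):

    adb reboot bootloader                    # reboot the phone into the bootloader / fastboot mode
    fastboot devices                         # confirm the host can see the device
    fastboot oem unlock 0xYOUR_UNLOCK_CODE   # unlock the bootloader (placeholder code)
    fastboot flash recovery twrp.img         # flash a recovery image such as TWRP (placeholder file)
    fastboot reboot                          # reboot into the system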

Team Win Recovery Project (TWRP)
https://twrp.me/sony/sonyxperias.html

Android Open Source Project (AOSP)
https://source.android.com

Magisk
Magisk is a suite of open source tools for customizing Android, supporting devices running Android 4.2 and higher. It covers fundamental parts of Android customization: root, boot scripts, SELinux patches, AVB2.0 / dm-verity / forceencrypt removals, etc.
https://github.com/topjohnwu/Magisk

Sony
https://developer.sony.com/develop/drivers
https://github.com/sonyxperiadev/kernel

Resource:

X Window System, X Server

https://en.wikipedia.org/wiki/X_Window_System
https://www.x.org
https://www.freedesktop.org

Context: How did I get here?
I was working on a Dell Inspiron N5050 running Ubuntu 18 LTS. It became slow and unstable, so I decided to try another GUI and installed Lubuntu. That improved the speed but was still unstable. Then I wanted to see how stable and fast it would be with just the CLI version of Ubuntu, so I deleted both ubuntu-desktop and lubuntu-desktop.

While trying to start Ubuntu in CLI mode, there was an error:
Failed to connect to lvmetad

Any key press led to the login prompt.

In the process, I ended up at startx, which led to this topic.

Simpsons Game

Comment:
Interesting project.

About:
Simpsons: Hit & Run API
This code is in a pre-pre-pre-alpha experimental state.
This is a library to automate the abandonware game Simpsons: Hit & Run with JavaScript. It uses frida to access internal state, and exposes JavaScript classes that can be used to query and control the game.
https://github.com/taviso/sharapi

Kubernetes Basic Concepts

Kubernetes

The complexity of modern system

Critical software systems are large and complex, and the complexity is increasing. Maintenance of such monoliths is a challenge. The software engineering community is addressing the challenge by adhering to the Unix Philosophy and moving towards the Cloud.
Traditionally, applications ran on one or more physical servers. The main drawback was that this did not scale well, as resources were not utilized to the maximum. Virtual Machines (VMs) solved that problem and also provided better security by providing isolated environments. As applications grew more complex, containerized applications emerged that abstracted and enhanced the functionality of VMs, making it far easier to maintain distributed systems.

Modernization Of Deployment Processes

Write programs that do one thing and do it well - The Unix Philosophy

Kubernetes Architecture

Kubernetes Overview



Kubernetes Objects:

  • Cluster is the pool of compute, storage, and network resources.
  • Node is a host machine running within the Cluster.
  • Namespace is a logical partition of a Cluster.
  • Pod is the basic unit of deployment.
  • Labels are key-value pairs used for identification and service discovery.
  • Service identifies a set of Pods using Label selectors.
  • Replica Set ensures Pod availability and scalability.
  • Deployment manages a Pod's lifecycle.
  • Ingress exposes HTTP and HTTPS routes from outside the Cluster to Services.

Processes running in Kubernetes:

  • kube-controller-manager – runs and manages controller processes.
  • kube-apiserver – the implementation of the Kubernetes API.
  • kube-scheduler – watches for newly created Pods with no assigned node, and selects a node for them to run on.
  • kubelet – communicates with the Master.
  • kube-proxy – a network proxy which reflects Kubernetes networking services on each node.

Internal Of Kubernetes Description

Container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. The container runtime is responsible for starting and managing containers.

Kubernetes is a powerful container orchestration system that can manage the deployment and operation of containerized applications across clusters of servers. In addition to coordinating container workloads, Kubernetes provides the infrastructure and tools necessary to maintain reliable network connectivity between your applications and services.

A Node is a physical or virtual machine. Every cluster must have at least one Master Node, which controls the cluster, and one or more Worker Nodes that host Pods.

Kubernetes Objects are persistent entities in Kubernetes that define everything in the cluster. All objects have unique names that allow idempotent creation and retrieval. These objects are stored in the etcd database as key-value pairs. Objects can be categorized as Basic Objects, which determine the deployed containerized application's workloads and their associated network and disk resources, and Higher Level Objects, which are built upon the basic objects to provide additional functionality and convenience features for managing the workloads. Higher level objects have a long-running, service-like lifecycle, except Jobs.

Basic Objects: Pod, Service, Volume and Namespace
Higher Level Objects: Replication Controllers, Replica Sets, Deployments, Stateful Sets, Daemon Sets, Jobs and Cron Jobs

A Cluster is a group of interconnected Nodes. The Cluster's state is defined by Kubernetes Objects. The Cluster's desired state includes what applications or other workloads to run, what container images they use, the number of replicas, and what network and disk resources to make available.

Namespaces are a way to divide cluster resources between users by creating multiple virtual clusters in the same physical cluster. They are used in environments with many users spread across multiple teams or projects. Namespaces cannot be nested inside one another, and each Kubernetes resource can only be in one Namespace. Objects in the same Namespace have the same access control policies by default. Labels are used to distinguish resources within the same Namespace. Namespace resources are not themselves in a Namespace, and low-level resources, such as Nodes and PersistentVolumes, are not in any Namespace.

A Pod represents a group of one or more containers running together and operating closely as a single, monolithic application on a Node in the Cluster. Pods are managed entirely as a unit and share resources like environment, volumes, and IP space. Pods consist of a main container that serves the workload and, optionally, some helper containers that facilitate closely related tasks. For example, a Pod may have one container running the primary application server and a helper container pulling down files to the shared filesystem when changes are detected in an external repository. Pods are managed by higher level objects through template definitions.
Pods represent and hold a collection of one or more containers. Generally, if you have multiple containers with a hard dependency on each other, you package the containers inside a single Pod.

Each individual worker node in the cluster runs two processes: kubelet and kube-proxy.

A Service groups together Pods that perform the same function and presents them as a single entity. It keeps track of containers in the Pods and routes traffic to them for internal and external access. A Service's IP address remains stable regardless of changes to the Pods it routes to, which aids discoverability and can simplify container designs. By default, Services are only available via an internally routable IP address, but they can be made available outside of the cluster by choosing one of several strategies.
Services use Labels to determine what Pods they operate on. If Pods have the correct Labels, they are automatically picked up and exposed by the Services.

The Kubernetes API is a resource-based (RESTful) programmatic interface provided via HTTP. It supports retrieving, creating, updating, and deleting primary resources via the standard HTTP verbs (POST, PUT, PATCH, DELETE, GET), includes additional subresources for many objects that allow fine-grained authorization (such as binding a pod to a node), and can accept and serve those resources in different representations for convenience or efficiency. It also supports efficient change notifications on resources via "watches" and consistent lists, to allow other components to effectively cache and synchronize the state of resources. It is the communication medium through which end users, different parts of the cluster, and external components interact with one another. Most Kubernetes API resource types are Kubernetes Objects, but a smaller number of API resource types are represented by operations.

A Controller is a non-terminating loop that regulates the state of a system. It watches the state of the cluster, then makes or requests changes where needed. Each controller tries to move the current cluster state closer to the desired state. There are different types of controllers for specific purposes.

The Kubernetes Control Plane is a collection of these Controllers. kube-apiserver, kube-controller-manager and kube-scheduler are the three critical processes that make up the control plane. Nodes that run these processes are called Master Nodes, which are replicated for availability and redundancy.

A Volume is simply an abstraction of data, in the form of files and directories, within a Pod. It exists as long as its Pod exists.

Secrets are used to share sensitive information, like SSH keys and passwords, with other Kubernetes objects within the same namespace.

Kubernetes Object Definition

Every Kubernetes Object definition is a YAML file that contains at least the following items:

  • apiVersion: The version of the Kubernetes API that the definition belongs to.
  • kind: The Kubernetes object this file represents. For example, a pod or service.
  • metadata: This contains the name of the object along with any labels that you may wish to apply to it.
  • spec: This contains a specific configuration depending on the kind of object you are creating, such as the container image or the ports on which the container will be accessible from.
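For illustration, a minimal object definition with these four items, applied with kubectl from a heredoc; the name, label, image, and port are assumed examples, not values from these notes:

    kubectl apply -f - <<'EOF'
    apiVersion: v1              # version of the Kubernetes API
    kind: Pod                   # the object this file represents
    metadata:
      name: demo-pod            # assumed example name
      labels:
        app: demo
    spec:
      containers:
        - name: app
          image: nginx:1.25     # assumed example image
          ports:
            - containerPort: 80
    EOF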

Instead of a spec key, a Secret uses a data or stringData key to hold the required information. The data parameter holds base64 encoded data that is automatically decoded when retrieved. The stringData parameter holds non-encoded data that is automatically encoded during creation or updates, and does not output the data when retrieving Secrets.
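A minimal Secret sketch along those lines, with made-up placeholder values; the stringData values are stored base64-encoded under data on creation:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Secret
    metadata:
      name: demo-credentials    # assumed example name
    type: Opaque
    stringData:                 # plain text here; encoded automatically on creation
      username: admin
      password: s3cr3t
    EOF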

Pods Management Controllers

Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.

Node Controller: Responsible for noticing and responding when nodes go down.

Replication Controller: Responsible for maintaining the correct number of pods for every replication controller object in the system.

Endpoints Controller: Populates the Endpoints object (that is, joins Services & Pods).

Service Account & Token Controllers: Create default accounts and API access tokens for new namespaces

Deployments are the most frequently used objects for stateless applications; they make lifecycle management of replicated Pods easier. Deployments can modify Pods through rolling updates, canary deploys, and blue/green deployments. Deployments can be modified easily by changing the configuration, and Kubernetes will adjust the replica sets, manage transitions between different application versions, and optionally maintain event history and undo capabilities automatically.

Stateful Sets are specialized Pod controllers for stateful applications that offer ordering and uniqueness guarantees. They are primarily used by systems that require stable network identifiers, stable persistent storage, and ordering guarantees, such as data-oriented applications like databases, which need access to the same volumes even if rescheduled to a new node.

The Replication Controller is responsible for ensuring that the number of Pods deployed in the cluster matches the number of Pods in its configuration. If a Pod or underlying host fails, the Controller will start new Pods to compensate. If the number of replicas in a Controller's configuration changes, the Controller either starts up or kills containers to match the desired number. Replication Controllers can also perform rolling updates to roll over a set of Pods to a new version one by one, minimizing the impact on application availability. Deployments use Replica Sets, described next, as their building block.

Replication Sets are an iteration on the Replication Controller design with greater flexibility in how the controller identifies the Pods it is meant to manage. The only thing it does not do is rolling updates.

Daemon Sets are another specialized form of Pod Controller that run a copy of a Pod on each node in the cluster (or a subset, if specified). This is most often useful when deploying pods that help perform maintenance and provide services for the nodes themselves. For instance, collecting and forwarding logs, aggregating metrics, and running services that increase the capabilities of the node itself are popular candidates for daemon sets.

Jobs are useful when containers are expected to exit successfully after some time, once they have completed their work.
Cron Jobs build on Jobs by running them on a recurring schedule.

Service Types

Kubernetes Services have 4 types, specified by the type field in the Service configuration file:

ClusterIP is the default type. It grants the Service a stable internal IP address accessible from anywhere inside of the cluster, which means the Service is only visible inside of the cluster.

NodePort configuration works by opening a static port on each node’s external networking interface. Traffic to the external port will be routed automatically to the appropriate pods using an internal cluster IP service. This will expose your Service on each Node at a static port, between 30000-32767 by default. When a request hits a Node at its Node IP address and the NodePort for your service, the request will be load balanced and routed to the application containers for your service.
It gives each node in the cluster an externally accessible IP.

LoadBalancer creates an external load balancer to route to the Service using a cloud provider's Kubernetes load-balancer integration. The Cloud Controller Manager will create the appropriate resource and configure it using the internal Service addresses. This will create a load balancer using your cloud provider's load balancing product, and configure a NodePort and ClusterIP for your Service to which external requests will be routed.
Creating LoadBalancer for each Deployment running in the cluster will create a new cloud load balancer for each Service, which can become costly. Ingress Controller is used to manage routing external requests to multiple services using a single load balancer.
It adds a load balancer from the cloud provider which forwards traffic from the service to Nodes within it.

ExternalName allows you to map a Kubernetes Service to a DNS record. It can be used for accessing external services from Pods using Kubernetes DNS.
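As a rough kubectl sketch of the four types; the deployment name, port, and external DNS name are assumptions:

    kubectl expose deployment demo --port=80 --type=ClusterIP      # stable internal-only IP (the default)
    kubectl expose deployment demo --port=80 --type=NodePort       # static port on every node
    kubectl expose deployment demo --port=80 --type=LoadBalancer   # load balancer from the cloud provider
    kubectl create service externalname demo-db --external-name=db.example.com  # DNS alias for an external service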

Label And Annotations

A Label is a semantic tag: a simple key-value pair that can be attached to Kubernetes Objects to mark them as part of a group. Labels should be used for semantic information useful for matching a Pod against selection criteria, while annotations are more free-form and can contain less structured data. Labels can then be selected when targeting different instances for management or routing.

Each of the controller-based objects use labels to identify the Pods that they should operate on. Services use labels to understand the backend Pods they should route requests to. Each unit can have more than one label, but each unit can only have one entry for each key. Usually, a “name” key is used as a general purpose identifier, but you can additionally classify objects by other criteria like development stage, public accessibility, application version, etc.

Annotations also allow you to attach arbitrary key-value information to an object, but they are more free-form, can contain less structured data, and are a way of adding rich metadata to an object that is not helpful for selection purposes.

Storage Management

The lifecycle of a Volume is tied to the lifecycle of the Pod, but not to that of a container. If a container within a Pod dies, the Volume persists and the newly launched container will be able to mount the same Volume and access its data. When a Pod gets restarted or dies, so do its Volumes, although if the Volumes consist of cloud block storage, they will simply be unmounted, with the data still accessible by future Pods.

To preserve data across Pod restarts and updates, the PersistentVolume (PV) and PersistentVolumeClaim (PVC) objects are used.

StorageClass defines the different types of storage offered, categorized as "classes" set up by the Cluster Administrator. Different "classes" might map to quality-of-service levels, or to backup policies, or to arbitrary policies. Kubernetes itself is unopinionated about what classes represent. This concept is sometimes called "profiles" in other storage systems.

PersistentVolume abstracts the details of the storage implementation, be that NFS, iSCSI, or a cloud-provider-specific storage system, provisioned either manually by a cluster admin or dynamically using Storage Classes. It is a resource in the cluster, just like a Node is a cluster resource. PersistentVolumes are volume plugins like Volumes, but have a lifecycle independent of any individual Pod.

PersistentVolumeClaim is a request for storage by a user. It is similar to a Pod: Pods consume Node resources and PersistentVolumeClaims consume PersistentVolume resources. Pods can request specific levels of resources (CPU and memory); PersistentVolumeClaims can request a specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany). The PersistentVolumeClaim mounts the PV at the required path. The spec for a PVC contains the following items:

  • accessModes which vary by the use case. These are:
    • ReadWriteOnce – mounts the volume as read-write by a single node
    • ReadOnlyMany – mounts the volume as read-only by many nodes
    • ReadWriteMany – mounts the volume as read-write by many nodes
  • resources – the storage space that you require
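A minimal PVC sketch along these lines; the claim name, size, and storage class are illustrative assumptions:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-claim           # assumed example name
    spec:
      accessModes:
        - ReadWriteOnce          # mounted read-write by a single node
      resources:
        requests:
          storage: 5Gi           # the storage space required
      storageClassName: standard # assumed class set up by the cluster administrator
    EOF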


Security And Policies

Security in Kubernetes is a big challenge, as the system is composed of many smaller standalone components. It provides many security mechanisms: Namespaces can be used for authentication, authorization and access control; Resource Quotas can be set to avoid resource cannibalization; and Network Policies can be set up for proper segmentation and traffic control.

Networking

All the components of Kubernetes are interconnected. For the entire system to function efficiently, reliably, and securely, networking plays a critical role. The basic requirements of a Kubernetes network are:

  • all containers can communicate with all other containers without NAT
  • all nodes can communicate with all containers (and vice-versa) without NAT
  • the IP that a container sees itself as is the same IP that others see it as

Network Address Translation (NAT) is a method of remapping one IP address space into another by modifying network address information in the IP header of packets while they are in transit across a traffic routing device.

Monitoring

Kubernetes includes some internal monitoring tools by default. These resources belong to its resource metrics pipeline, which ensures that the cluster runs as expected. The cAdvisor component collects network usage, memory, and CPU statistics from individual containers and nodes and passes that information to kubelet; kubelet in turn exposes that information via a REST API. The Metrics Server gets this information from the API and then passes it to the kube-aggregator for formatting.

References

Implementation/Usage Blogs

Docker

Docker


Docker Daemon: A constant background process that helps to manage/create Docker images, containers, networks, and storage volumes.
Docker Engine REST API: An API used by applications to interact with the Docker daemon; it can be accessed by an HTTP client.
Docker CLI: A Docker command line client for interacting with the Docker daemon, a.k.a. the docker command.

If we think about it differently, we can identify some problems with Docker:

  • Docker runs as a single process, which can become a single point of failure.
  • All the child processes are owned by this one process.
  • If the Docker daemon fails at any point, all the child processes lose their parent and become orphaned.
  • Security vulnerabilities.
  • All the steps of Docker operations need to be performed by root.


To understand why the Docker Daemon is running with root access and how this can become a problem, we first have to understand the Docker architecture (at least on a high level).

Container images are specified with the Dockerfile. The Dockerfile details how to build an image based on your application and resources. Using Docker, we can use the build command to build our container image. Once you have built the image from your Dockerfile, you can run it. Upon running the image, a container is created.
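A minimal sketch of that build-and-run cycle, assuming a Dockerfile exists in the current directory; the image name and port mapping are made up:

    docker build -t demo-app:latest .             # build an image from the Dockerfile in .
    docker run --rm -p 8080:80 demo-app:latest    # running the image creates a container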

The problem with this is that you cannot use Docker directly on your workstation. Docker is composed of a variety of different tools, and in most cases you will only interact with the Docker CLI. However, running an application with Docker means that you have to run the Docker Daemon with root privileges. It actually binds to a Unix socket instead of a TCP port. By default, that Unix socket is owned by the user root, so users can only access it using the sudo command.

The Docker Daemon is responsible for the state of your containers and images, and facilitates any interaction with “the outside world.” The Docker CLI is merely used to translate commands into API calls that are sent to the Docker Daemon. This allows you to use a local or remote Docker Daemon.

Running the Docker Daemon locally, you risk that any process that breaks out of the Docker Container will have the same rights as the host operating system. This is why you should only trust your own images that you have written and understand.

Resource:

Elastic Stack - Elasticsearch, Logstash, Kibana

API, API Design, REST API, API Gateway

Android Open Source Project (AOSP)

Firewall

Firewall

A firewall is designed to act as a protective system. It performs analysis of the metadata of network packets and allows or blocks traffic based upon predefined rules. This creates a boundary over which certain types of traffic or protocols cannot pass.

Since a firewall is an active protective device, it is more like an Intrusion Prevention System (IPS) than an Intrusion Detection System (IDS).

Resource

Reference

#167 #331 #360 #93

DevOps

The Mindset

A DevSecOps approach is based on the idea that security is incorporated throughout the entire software development process, bringing together app dev, operations, and security teams.

Enable the business.

DevSecOps should be built into every part of the development lifecycle. Get the security outcomes your organization needs:

  • Reduce time to remediate security vulnerabilities
  • Improve visibility and control of container contents and lifecycle
  • Reduce toil for security, compliance, and DevOps teams

- VMware Tanzu Newsletter

Resource

Golang Basics

Resource

Apache Lucene, Solr

Lucene

https://lucene.apache.org
https://en.wikipedia.org/wiki/Apache_Lucene

The Apache Lucene™ project develops open-source search software. The project releases a core search library, named Lucene™ core, as well as the Solr™ search server.

Apache Lucene is a free and open-source search engine software library, originally written in Java by Doug Cutting.

Solr

Solr (pronounced "solar") is an open-source enterprise-search platform, written in Java. Its major features include full-text search, hit highlighting, faceted search, real-time indexing, dynamic clustering, database integration, NoSQL features and rich document (e.g., Word, PDF) handling. Providing distributed search and index replication, Solr is designed for scalability and fault tolerance. Solr is widely used for enterprise search and analytics use cases and has an active development community and regular releases.

Solr runs as a standalone full-text search server. It uses the Lucene Java search library at its core for full-text indexing and search, and has REST-like HTTP/XML and JSON APIs that make it usable from most popular programming languages. Solr's external configuration allows it to be tailored to many types of applications without Java coding, and it has a plugin architecture to support more advanced customization.

Apache Solr is developed in an open, collaborative manner by the Apache Solr project at the Apache Software Foundation.

In order to search a document, Apache Solr performs the following operations in sequence:

  • Indexing: converts the documents into a machine-readable format.
  • Querying: understanding the terms of a query asked by the user. These terms can be images or keywords, for example.
  • Mapping: Solr maps the user query to the documents stored in the database to find the appropriate result.
  • Ranking: as soon as the engine searches the indexed documents, it ranks the outputs by their relevance.

Solr is supported as an endpoint in various data processing frameworks and enterprise integration frameworks.

Solr exposes industry standard HTTP REST-like APIs with both XML and JSON support, and will integrate with any system or programming language supporting these standards. For ease of use there are also client libraries available for Java, C#, PHP, Python, Ruby and most other popular programming languages.

Diff

Bash Shell Script

Shell Script

When to use Shell

Shell should only be used for small utilities or simple wrapper scripts.
While shell scripting isn’t a development language, it is used for writing various utility scripts throughout Google. This style guide is more a recognition of its use rather than a suggestion that it be used for widespread deployment.

Some guidelines:

  • If you're mostly calling other utilities and are doing relatively little data manipulation, shell is an acceptable choice for the task.
  • If performance matters, use something other than shell.
  • If you are writing a script that is more than 100 lines long, or that uses non-straightforward control flow logic, you should rewrite it in a more structured language now. Bear in mind that scripts grow. Rewrite your script early to avoid a more time-consuming rewrite at a later date.
  • When assessing the complexity of your code (e.g. to decide whether to switch languages), consider whether the code is easily maintainable by people other than its author.

Debugging

Run a script as: bash <option> <script> <parameters>
bash -x – print commands before execution
bash -u – stop with error if undefined variable is used
bash -v – print script lines before execution
bash -n – do not execute commands
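For example, the flags can be combined to trace a (hypothetical) script while also failing on undefined variables:

    bash -xu ./backup.sh /tmp/dest   # print each command before execution; error on unset variables
    set -xu                          # the same flags can also be enabled from inside the script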

Variables, Arrays and Hashes

NAME=10 – set value to variable $NAME, ${NAME}
export NAME=10, typeset -x NAME – set as environment variable
D=$(date); D=`date` – variable contains output of command date
env, printenv – list all environment variables
set – list env. variables, can set bash options and flags shopt
unset name – destroy variable of function
typeset, declare – set type of variable
readonly variable – set as read only
local variable – set local variable inside function
${!var}, eval \$$var – indirect reference
${parameter-word} – if parameter has value, then it is used, else word is used
${parameter=word} – if parameter has no value, assign word. Doesn't work with $1, $2, etc.
${parameter:-word} – works with $1, $2, etc.
${parameter?word} – if parameter has value, use it; if not, display word and exit script.
${parameter+word} – if parameter has value, use word, else use empty string
array=(a b c); echo ${array[1]} – print "b"
array+=(d e f) – append new item/array at the end
${array[*]}, ${array[@]} – all items of array
${#array[*]}, ${#array[@]} – number of array items
declare -A hash – create associative array (from bash version 4)
hash=([key1]=value ["other key2"]="other value") – store items
${hash["other key2"]}, ${hash[other key2]} – access
${hash[@]}, ${hash[*]} – all items
${!hash[@]}, ${!hash[*]} – all keys
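A small worked example of the array and hash syntax above (bash 4 or later for the associative array):

    declare -A ports=([http]=80 [https]=443)   # create a hash with two items
    ports[ssh]=22                              # add another item
    for key in "${!ports[@]}"; do              # iterate over all keys
      echo "$key -> ${ports[$key]}"
    done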

Strings

STRING="Hello" – indexing: H0 e1 l2 l3 o4
STRING+=" world!" – concatenate strings
${#string}, expr length $string – string length
${string:position} – extract substring from position
${string:position:length} – extract substr. of length from position
${string/substring/substitution} – substitute first occurrence
${string//substring/substitution} – substitute all
${string/%substring/substitution} – substitute occurrence at the end of the string
${string#substring} – erase shortest match of substring from the front
${string##substring} – erase longest match of substring from the front
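A quick demonstration of these operations on a throwaway string:

    s="hello world"
    echo "${#s}"        # 11 – string length
    echo "${s:6}"       # world – substring from position 6
    echo "${s:0:5}"     # hello – substring of length 5 from position 0
    echo "${s/o/0}"     # hell0 world – first occurrence substituted
    echo "${s//o/0}"    # hell0 w0rld – all occurrences substituted
    echo "${s#hello }"  # world – shortest match erased from the front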

Embedded variables

~, $HOME – home directory of current user
$PS1, $PS2 – primary, secondary user prompt
$PWD, ~+ / $OLDPWD, ~- – actual/previous directory
$RANDOM – random number generator, 0 – 32,767
$? – return value of last command
$$ – process id. of current process
$! – process id. of last background command
$PPID – process id. of parent process
$- – display of bash flags
$LINENO – current line number in executed script
$PATH – list of paths to executable commands
$IFS – Internal field separator. List of chars, that delimiter words from input, usually space, tabulator $'\t' and new line $'\n'.

Script command line parameters

$0, ${0} – name of script/executable
$1 to $9, ${1} to ${255} – positional command line parameters
PAR=${1:?"Missing parameter"} – error when ${1} is not set
PAR=${1:-default} – when ${1} is not set, use default value
$# – number of command line parameters (argc)
${!#} – the last command line parameter
$* – expand all parameters, "$*" = "$1 $2 $3…"
$@ – expand all parameters, "$@" = "$1" "$2" "$3"…
$_ – last parameter of previous command
shift – rename arguments, $2 to $1, $3 to $2, etc.; lowers counter $#
xargs command – read stdin and put it as parameters of command
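A common idiom combining $#, $1 and shift to walk through all positional parameters:

    while [ "$#" -gt 0 ]; do
      echo "arg: $1 (remaining: $#)"
      shift                          # $2 becomes $1 and the counter $# decreases
    done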

Read options from command line

while getopts "a:b" opt; do
  case $opt in
    a) echo "a = $OPTARG" ;;
    b) echo "b" ;;
    ?) echo "Unknown parameter!" ;;
  esac
done
shift $(($OPTIND - 1)); echo "Last: $1"

Control expressions

(commands), $(commands), `commands` – run in a subshell; { commands; } – group commands in the current shell
$(program), `program` – output of program replaces command
test, [ ] – condition evaluation:
    numeric comparison: a -eq b … a=b, a -ge b … a≥b, a -gt b … a>b, a -le b … a≤b, a -lt b … a<b
    file system: -d file is directory, -f file exists and is not dir., -r file exists and is readable, -w file exists and is writable, -s file is non-zero size, -a file exists
    logical: -a and, -o or, ! negation
[[ ]] – comparison of strings, equal =, non-equal !=, -z string is zero sized, -n string is non-zero sized, <, > lexical comparison
[ condition ] && [ condition ]
true – returns 0 value
false – returns 1 value
break – terminates executed cycle
continue – starts new iteration of cycle
eval parameters – executes parameters as command
exit value – terminates script with return value
. script, source script – reads and interprets another script
: argument – just expand argument or do redirect
alias name='commands' – expand name to commands
unalias name – cancel alias
if [ condition ]; then commands;
elif [ condition ]; then commands;
else commands; fi
for variable in arguments; do commands; done
    {a..z} – expands to a b c … z
    {i..n..s} – sequence from i to n with step s
    \"{a,b,c}\" – expands to "a" "b" "c"
    {1,2}{a,b} – expands to 1a 1b 2a 2b
    seq start step end – number sequence
for((i=1; i<10; i++)); do commands; done
while returns true; do commands; done
until [ test returns true ]; do commands; done
case $prom in
  value_1) commands ;;
  value_2) commands ;;
  *) implicit commands ;;
esac
Function definition: function name () { commands; }
return value – return value of the function
declare -f function – print function declaration
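A tiny example tying several of these constructs together (a function definition, a local variable with a default value, a test, and a return value):

    function greet () {
      local name=${1:-world}         # default used when $1 is not set
      if [ -n "$name" ]; then
        echo "Hello, $name!"
      fi
      return 0
    }
    greet        # prints: Hello, world!
    greet Ana    # prints: Hello, Ana!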

Resource

Access iPhone from Ubuntu

  • Install libimobiledevice library - For iPhone and other iOS devices to be recognized on Ubuntu
    sudo apt install libimobiledevice6 libimobiledevice-utils

  • Pair to the iPhone. Sometimes the iPhone file system doesn’t mount automatically when connected.
    idevicepair pair

  • Allow multiple connections to the iPhone by running the USB multiplexing daemon
    usbmuxd -f -v

  • Install the iFuse package, which allows you to mount and access the file system on iOS devices.
    sudo apt install ifuse

  • Create a folder to mount the iPhone
    mkdir /media/iphone

  • Mount the iPhone to the directory
    ifuse /media/iphone

  • Unmount
    ifuse -u /media/iphone

Container, Containerization

Objectives:

  • What is a container?
  • How do containers work?

A container is an application bundle of lightweight components, such as application dependencies, libraries, and configuration files, that run in an isolated environment on top of traditional operating systems or in virtualized environments for easy portability and flexibility.

Containers

Linux containers are technologies that allow us to package and isolate applications with their entire runtime environment—all of the files necessary to run. This makes it easy to move the contained application between environments (dev, test, production, etc.) while retaining full functionality.

That is, containers provide a logical packaging mechanism in which applications can be abstracted from the environment in which they actually run, which allows container-based applications to be deployed easily and consistently, regardless of whether the target environment is a private data center, the public cloud, or even a developer's personal laptop.

Containers are not a sandbox. While containers have revolutionized how we develop, package, and deploy applications, running untrusted or potentially malicious code without additional isolation is not a good idea. The efficiency and performance gains from using a single, shared kernel also mean that container escape is possible with a single vulnerability.

Containers provide isolation between the application environment and the external host system, support a networked, service-oriented approach to inter-application communication, and typically take configuration through environmental variables and expose logs written to standard error and standard out.

Containers themselves encourage process-based concurrency and help maintain dev/prod parity by being independently scalable and bundling the process’s runtime environment.

Containers create consistent environments to rapidly develop and deliver cloud-native applications that can run anywhere.

Containers are also an important part of IT security. By building security into the container pipeline and defending your infrastructure, you can make sure your containers are reliable, scalable, and trusted.

Containers silo applications from each other unless you explicitly connect them. That means you don't have to worry about conflicting dependencies or resource contention — you set explicit resource limits for each service. Importantly, it's an additional layer of security since your applications aren't running directly on the host operating system.
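With Docker, for instance, such limits can be set per container at run time; the container name, image, and values here are illustrative:

    docker run -d --name api --memory=256m --cpus=0.5 demo-api:latest   # cap memory and CPU for this container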

Benefits Of Using Containers:

Portability: Apps developed in containers have everything they need to run and can be deployed in multiple environments, including private and public clouds. Portability means flexibility because you can more easily move workloads between environments and providers. 
Scalability: Containers have the ability to scale horizontally, meaning a user can multiply identical containers within the same cluster to expand when needed. By using and running only what you need when you need it, you can reduce costs dramatically. 
Efficiency: Containers require fewer resources than virtual machines (VMs) since they don’t need a separate operating system. You can run several containers on a single server and they require less bare-metal hardware, which means lower costs.
Increased security: Containers are isolated from each other, which means if one container is compromised, others won’t be affected. 
Speed: Because of their autonomy from the operating system, starting and stopping a container is a matter of seconds. This also allows for faster development and operational speed, as well as a faster, smoother user experience.

Containerization provides a clean separation of concerns: developers focus on their application logic and dependencies, while IT operations teams can focus on deployment and management without bothering with application details such as specific software versions and configurations. It reinforces many of the principles from #174 The Twelve-Factor App, allowing easy scaling and management.

Containers have garnered broad appeal through their ability to package an application and its dependencies into a single image that can be promoted from development, to test, and to production. Containers make it easy to ensure consistency across environments and across multiple deployment targets like physical servers, virtual machines (VMs), and private or public clouds. With containers, teams can more easily develop and manage the applications that deliver business agility.

Applications: Containers make it easier for developers to build and promote an application and its dependencies as a unit. Containers can be deployed in seconds. In a containerized environment, the software build process is the stage in the life cycle where application code is integrated with needed runtime libraries.

Infrastructure: Containers represent sandboxed application processes on a shared Linux® operating system (OS) kernel. They are more compact, lighter, and less complex than virtual machines and are portable across different environments—from on-premises to public cloud platforms.

Kubernetes is the container orchestration platform of choice for the enterprise. With many organizations now running essential services on containers, ensuring container security has never been more critical. This paper describes the key elements of security for containerized applications.

Containers make it easier for developers to build and promote an application and its dependencies as a unit. Containers also make it easy to get the most use of your servers by allowing for multitenant application deployments on a shared host. You can easily deploy multiple applications on a single host, spinning up and shutting down individual containers as needed. Unlike traditional virtualization, you do not need a hypervisor to manage guest operating systems on each VM. Containers virtualize your application processes, not your hardware.

Of course, applications are rarely delivered in a single container. Even simple applications typically have a frontend, a backend, and a database. And deploying modern microservices-based applications in containers means deploying multiple containers—sometimes on the same host and sometimes distributed across multiple hosts or nodes.

When managing container deployment at scale, you need to consider:

  • Which containers should be deployed to which hosts?
  • Which host has more capacity?
  • Which containers need access to each other and how will they discover each other?
  • How do you control access to and management of shared resources such as network and storage?
  • How do you monitor container health?
  • How do you automatically scale application capacity to meet demand?
  • How do you enable developer self-service while also meeting security requirements?

You can build your own container management environment, which requires spending time integrating and managing individual components. Or you can deploy a container platform with built-in management and security features. This approach lets your team focus their energies on building the applications that provide business value rather than reinventing infrastructure.


Resources

#304

JavaScript


JavaScript is a dialect of the ECMAScript language.
JavaScript is the coffee-flavored language with which I love to program. ECMAScript is the specification it’s based on.
By reading the ECMAScript specification, you learn how to create a scripting language.
By reading the JavaScript documentation, you learn how to use a scripting language.

A JavaScript engine is a program or interpreter that understands and executes JavaScript code. Synonyms: JavaScript interpreter, JavaScript implementation. JavaScript engines are commonly found in web browsers, including V8 in Chrome, SpiderMonkey in Firefox, and Chakra in Edge. Each engine is like a language module for its application, allowing it to support a certain subset of the JavaScript language. A JavaScript engine is to a browser what language comprehension is to a person.

A transpiler converts ES6 code to ES5 code; Babel, for example, is one such tool.
