For those who seek help in different areas of software and hardware platforms

These Twenty-Two Android Apps Contain Malware

Experts at security firm Sophos are warning Android phone owners about 22 dodgy apps that drain battery life and could result in a big phone bill. The Sun reported that the "click-fraud" apps pose as normal apps on the Google Play Store but secretly perform criminal actions out of sight.
Share:

How To Set Up a Jupyter Notebook with Python 3 on Ubuntu 18.04

Jupyter Notebook is an open-source web application that allows you to create and share interactive code, visualizations, and more. This tool can be used with several programming languages, including Python, Julia, R, Haskell, and Ruby. It is often used for working with data, statistical modeling, and machine learning.

This tutorial will show you how to set up Jupyter Notebook on Ubuntu 18.04.
Share:

How To Set Up Laravel, Nginx, and MySQL using Docker Compose on Ubuntu 18.04

This tutorial will take you through the steps to set up the Laravel framework with Nginx as the web server and MySQL as the database, all inside Docker containers. We will define the entire stack configuration in a docker-compose file, along with configuration files for PHP, MySQL, and Nginx.
Share:

Wireless access points from multiple vendors are potentially at risk

Internet of Things (IoT) security vendor Armis has found another set of Bluetooth flaws; this time the issues are in Texas Instruments chips that are used in widely deployed enterprise WiFi access points.

Bleedingbit was publicly announced by IoT security firm Armis on Nov. 1; it impacts Bluetooth Low Energy (BLE) chips made by Texas Instruments (TI) that are used in Cisco, Meraki and Aruba wireless access points. According to Armis, the impacted vendors were all contacted in advance of the disclosure so that patches could be made available.
Share:

Red Hat Enterprise Linux 7.6 Released with Improved Security

The latest release of Red Hat's flagship Linux platform adds TPM 2.0 support for security authentication, as well as integrating the open source nftables firewall technology.

Red Hat announced the general availability of its flagship Red Hat Enterprise Linux (RHEL) 7.6 release on Oct. 30, providing organizations with improved security, management and container features.  Among the enhanced features is support for the Trusted Platform Module (TPM) 2.0 specification for security authentication.
Share:

Mirantis Brings Cloud Computing to the Edge Without OpenStack

The Mirantis Cloud Platform Edge is a new Kubernetes-based effort to enable containers and virtual machines to run at the edge of the network. The concept of edge computing has been steadily evolving in recent years as a way to bring cloud computing type approaches to the edge of network deployments.
Share:

Dell Introduces New Rugged Enterprise Notebooks

The laptops, which start at $1,399, include two revamped models and an all-new thinner and lighter version for mobile workers. Dell has introduced three new rugged notebooks aimed at enterprise workers who require machines that can withstand tough environmental and physical conditions, including water, dirt, drops and drastic temperature changes.
Share:

This Is Why They Are Ahead of Us

Singapore abolishes school exam rankings, says learning is not competition.

Whether a child finishes first or last will no longer be indicated in primary and secondary school report books in Singapore from next year – a move which Education Minister Ong Ye Kung hopes will show students that “learning is not a competition”.
Share:

How To Set Up LEMP Stack (Linux, Nginx, MySQL, PHP) on Debian 9

The LEMP Stack is a group of software that can be used to serve dynamic web pages and web applications. This is an acronym that describes a Linux operating system, with an Nginx web server. The backend data is stored in the MySQL database and the dynamic processing is handled by PHP.

This tutorial will show you how to set up a LEMP stack on a Debian 9 server.
Share:

Apple Unveils Latest iPhone XS, iPhone XS Max and iPhone XR

Apple on Sept. 12 introduced new iPhones and an updated Apple Watch, with improvements to its processing chip (now the A12) and its iOS operating system. Among the next generation of iPhones, one is by far the largest yet—the iPhone XS Max, whose 6.5-inch screen is larger (but not wider) than any of its predecessors and all the Samsung Galaxy models. Samsung’s new Note 9 has a 6.4-inch screen; one has to wonder if Apple added the extra 1/10th of an inch just for bragging rights.
Share:

How To Install Node.JS on Debian 9 Server

Node.js is a JavaScript platform for general-purpose programming that allows users to build network applications quickly. By leveraging JavaScript on both the front and backend, Node.js makes development more consistent and integrated.

This tutorial will show you how to install Node.js on a Debian 9 server.
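
As a quick preview, and assuming you opt for the NodeSource repository rather than Debian's default packages, the installation comes down to a couple of commands (the 8.x branch shown here is only an example release line):

    # Add the NodeSource repository for the chosen release line
    curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -

    # Install Node.js (includes npm) and confirm the version
    sudo apt-get install -y nodejs
    node -v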
Share:

How to Jailbreak iOS 11.2 and 11.3.1

This tutorial will show you how to jailbreak iOS 11.2 and 11.3.1 using Electra1131. If you are ready to proceed with the jailbreak process, follow these steps:
Share:

How To Increase the Security and Usability on Debian 9

When you first install a new Debian 9 server, there are a few configuration steps that you should take early on as part of the initial setup. This will increase the security and usability of your server and will give you a solid foundation for subsequent actions.
Share:

How To Set Up PostgreSQL Logical Replication on Ubuntu 18.04

This guide will show you how to set up two-nodes logical replication with PostgreSQL on Ubuntu 18.04. One server will act as the master and the other as the replica. At the end of this guide, you will be able to replicate data from the master server to the replica using logical replication.
Share:

How to Set Up a Multi-Node MySQL High Availability Cluster on Ubuntu 18.04

The MySQL Cluster distributed database provides high availability and throughput for your MySQL database management system. A MySQL Cluster consists of one or more management nodes (ndb_mgmd) that store the cluster’s configuration and control the data nodes (ndbd), where cluster data is stored. After communicating with the management node, clients (MySQL clients, servers, or native APIs) connect directly to these data nodes.
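
As a rough illustration of that layout, a minimal config.ini on the management node might look like the sketch below. The IP addresses and data directories are placeholders for this example, not values from the tutorial itself:

[ndbd default]
# Number of copies of each fragment of data kept in the cluster
NoOfReplicas=2

[ndb_mgmd]
# Management node
HostName=198.51.100.2
DataDir=/var/lib/mysql-cluster

[ndbd]
# First data node
HostName=198.51.100.3
DataDir=/usr/local/mysql/data

[ndbd]
# Second data node
HostName=198.51.100.4
DataDir=/usr/local/mysql/data

[mysqld]
# SQL node that clients connect to
HostName=198.51.100.5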
Share:

How To Set Up Apache Kafka on CentOS/RHEL 7

This tutorial will show you how to install and use Apache Kafka 1.1.0 on CentOS 7.

Apache Kafka is a popular distributed message broker designed to efficiently handle large volumes of real-time data. A Kafka cluster is not only highly scalable and fault-tolerant, but it also has a much higher throughput compared to other message brokers such as ActiveMQ and RabbitMQ. Though it is generally used as a publish/subscribe messaging system, a lot of organizations also use it for log aggregation because it offers persistent storage for published messages.
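
To give a feel for the publish/subscribe model, the commands below (run from the Kafka installation directory) create a topic and then push and read a message using the console producer and consumer that ship with Kafka 1.1.0; the topic name and localhost addresses are just examples:

    # Create a topic (Kafka 1.1.0 still registers topics through ZooKeeper)
    bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic TutorialTopic

    # Publish a message to the topic
    echo "Hello, World" | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TutorialTopic

    # Read messages back from the beginning of the topic
    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic TutorialTopic --from-beginning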
Share:

How To Set Up Redis on Ubuntu 18.04

Redis is an in-memory key-value store known for its flexibility, performance, and broad language support. It is commonly used as a database, cache, and message broker, and supports a wide range of data structures.

This guide will show you how to set up Redis from source on Ubuntu 18.04.
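
Once Redis is running, a short redis-cli session illustrates the key-value model and a couple of the data structures mentioned above; the key names are arbitrary examples:

    redis-cli
    SET greeting "Hello, Redis"
    GET greeting
    LPUSH tasks "write report" "send email"
    LRANGE tasks 0 -1
    EXPIRE greeting 60
    TTL greeting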
Share:

How To Set Up Kubernetes Cluster using Kubeadm on CentOS/RHEL 7

This tutorial will show you how to set up a Kubernetes cluster from scratch using Ansible and kubeadm on CentOS/RHEL 7, and then deploy a containerized Nginx application to it.

Kubernetes is a container orchestration system that manages containers at scale. Initially developed by Google based on its experience running containers in production, Kubernetes is open source and actively developed by a community around the world.
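
As a taste of what the deployment step looks like once the cluster is up, the kubectl commands below run an Nginx container and expose it on a node port; the deployment name is just an example:

    # Create a Deployment running the official nginx image
    kubectl create deployment nginx --image=nginx

    # Expose it outside the cluster on an automatically assigned NodePort
    kubectl expose deployment nginx --port 80 --type NodePort

    # Verify the pod and service are running
    kubectl get pods
    kubectl get services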
Share:

How To Administer and Manage Windows Server 2019 Core using Admin Center

The Windows Server Core installation option is good enough that Windows administrators should consider standardizing all of their servers to run as Core. It makes sense: the fewer operating system components you have present, the better your performance and the smaller your system attack surface. As of the 1709 update, Windows Server 2016 (Semi-Annual Channel) no longer allows you to add the GUI layer to a Server Core installation.

This tutorial will show you how to manage Windows Server 2019 using Windows Admin Center, PowerShell Core, and the sconfig utility.
Share:

How To Set Up Anaconda Python Distribution on Ubuntu 18.04

Anaconda is an open-source package manager, environment manager, and distribution of the Python and R programming languages. It is designed for data science and machine learning workflows, commonly used for large-scale data processing, scientific computing, and predictive analytics.

Anaconda is available in both free and paid enterprise versions. The Anaconda distribution ships with the conda command-line utility. This guide will show you how to install the Python 3 version of Anaconda on an Ubuntu 18.04 server.
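
After installation, most day-to-day work happens through conda. A minimal session might look like this; the environment name my_env is just an example, and on older conda releases you would use source activate instead of conda activate:

    # Create an isolated environment with a specific Python version
    conda create --name my_env python=3

    # Switch into it and install a package
    conda activate my_env
    conda install numpy

    # List your environments
    conda env list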
Share:

How To Set Up LEMP Stack (Linux, Nginx, MySQL, PHP) on Ubuntu 18.04

The LEMP software stack is a group of software that can be used to serve dynamic web pages and web applications. This is an acronym that describes a Linux operating system, with an Nginx (pronounced like “Engine-X”) web server. The backend data is stored in the MySQL database and the dynamic processing is handled by PHP.

This tutorial will show you how to install a LEMP stack on an Ubuntu 18.04 server.
Share:

How To Set Up Single-Sign-On (SSO) in Apache using Active Directory Federation Services

This tutorial will take you through the steps to set up Single Sign-On in Apache using Mellon and Active Directory Federation Services on CentOS/RHEL 7/8.

Single sign-on (SSO) is a property of access control across multiple related, yet independent, software systems. With this property, a user logs in with a single ID and password to gain access to a connected system or systems without using different usernames or passwords, or, in some configurations, signs on seamlessly at each system.
Share:

How To Install and Secure MySQL on Ubuntu 18.04

MySQL is an open-source database management system, commonly installed as part of the popular LAMP (Linux, Apache, MySQL, PHP/Python/Perl) stack. It uses a relational database and SQL (Structured Query Language) to manage its data.

This guide will show you how to install and secure MySQL on Ubuntu 18.04.
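
At a high level, the install-and-secure flow on Ubuntu 18.04 comes down to the commands below; mysql_secure_installation then walks you through removing anonymous users, disabling remote root login, and setting a password policy:

    sudo apt update
    sudo apt install mysql-server
    sudo mysql_secure_installation

    # Confirm the server is running
    systemctl status mysql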
Share:

How To Set Up PostgreSQL on Ubuntu 18.04

PostgreSQL, or Postgres, is a relational database management system that provides an implementation of the SQL querying language. It is a popular choice for many small and large projects and has the advantage of being standards-compliant and having many advanced features like reliable transactions and concurrency without read locks.

This tutorial will show you how to install Postgres and how to perform some basic database administration on Ubuntu 18.04.
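
For reference, the core steps on Ubuntu 18.04 look like this; the database name sammy below is only an example:

    sudo apt update
    sudo apt install postgresql postgresql-contrib

    # Create a new role and a database owned by the postgres account
    sudo -u postgres createuser --interactive
    sudo -u postgres createdb sammy

    # Open a Postgres prompt as the postgres user
    sudo -u postgres psql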
Share:

How To Install Ubuntu 18.04 Desktop alongside Windows 10 in Dual Boot

This guide will walk you through the installation steps of Ubuntu 18.04 in dual boot with Windows 10.
Share:

How To Install Node.JS on Ubuntu 18.04


Node.js is a JavaScript platform for general-purpose programming that allows users to build network applications quickly. By leveraging JavaScript on both the front and backend, Node.js makes development more consistent and integrated.
Share:

How To Secure Apache Web Server using Let's Encrypt on Ubuntu 18.04


Let's Encrypt is a Certificate Authority (CA) that provides an easy way to obtain and install free TLS/SSL certificates, thereby enabling encrypted HTTPS on web servers. It simplifies the process by providing a software client, Certbot, that attempts to automate most (if not all) of the required steps. Currently, the entire process of obtaining and installing a certificate is fully automated on both Apache and Nginx.
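
Assuming Certbot and its Apache plugin are already installed (for example from the Certbot PPA on Ubuntu 18.04), obtaining and installing a certificate is typically a single command; example.com is a placeholder domain:

    sudo certbot --apache -d example.com -d www.example.com

    # Simulate renewal to confirm the automated renewal job will work
    sudo certbot renew --dry-run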
Share:

How to Install Apache Web Server on Ubuntu 18.04


The Apache HTTP server is the most widely-used web server in the world. It provides many powerful features including dynamically loadable modules, robust media support, and extensive integration with other popular software.
Share:

How To Secure Nginx using Let's Encrypt on Ubuntu 18.04


Let's Encrypt is a Certificate Authority (CA) that provides an easy way to obtain and install free TLS/SSL certificates, thereby enabling encrypted HTTPS on web servers. It simplifies the process by providing a software client, Certbot, that attempts to automate most (if not all) of the required steps. Currently, the entire process of obtaining and installing a certificate is fully automated on both Apache and Nginx.
Share:

How To Set Up Nginx on Ubuntu 18.04


Nginx is one of the most popular web servers in the world and is responsible for hosting some of the largest and highest-traffic sites on the internet. It is more resource-friendly than Apache in most cases and can be used as a web server or reverse proxy.
Share:

How To Set Up Password-less SSH on Ubuntu 18.04


SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with an Ubuntu server, chances are you will spend most of your time in a terminal session connected to your server through SSH.
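
The heart of key-based SSH access is generating a key pair locally and copying the public key to the server; the username sammy and your_server_ip below are placeholders:

    # Generate an RSA key pair (accept the defaults or set a passphrase)
    ssh-keygen -t rsa -b 4096

    # Copy the public key to the remote server's authorized_keys file
    ssh-copy-id sammy@your_server_ip

    # Log in; no password prompt should appear if the key was accepted
    ssh sammy@your_server_ip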
Share:

How To Set Up LAMP (Linux, Apache, MySQL, PHP) on Ubuntu 18.04


A "LAMP" stack is a group of open source software that is typically installed together to enable a server to host dynamic websites and web apps. This term is actually an acronym which represents the Linux operating system, with the Apache web server. The site data is stored in a MySQL database, and dynamic content is processed by PHP.
Share:

Basic Server Setup with Ubuntu 18.04


When you first create a new Ubuntu 18.04 server, there are a few configuration steps that you should take early on as part of the basic setup. This will increase the security and usability of your server and will give you a solid foundation for subsequent actions.
Share:

How To Upgrade Ubuntu 16.04 to Ubuntu 18.04

The Ubuntu operating system's latest Long Term Support (LTS) release, Ubuntu 18.04 (Bionic Beaver), was released on April 26, 2018. This tutorial will explain how to upgrade an Ubuntu system of version 16.04 or later to Ubuntu 18.04.
Share:

What's New in Ubuntu 18.04

The Ubuntu operating system's most recent Long Term Support (LTS) release, version 18.04 (Bionic Beaver), was released on April 26, 2018. This article is intended as a brief overview of new features and significant changes to Ubuntu Server since the previous LTS release, 16.04 (Xenial Xerus). It synthesizes information from the official Bionic Beaver release notes and other sources.
Share:

How To Set Up Kubernetes Cluster using Kubeadm on Ubuntu 16.04

This guide will show you how to set up a Kubernetes cluster from scratch using Ansible and Kubeadm, and then deploy a containerized Nginx application to it.

Kubernetes is a container orchestration system that manages containers at scale. Initially developed by Google based on its experience running containers in production, Kubernetes is open source and actively developed by a community around the world.
Share:

How to Set Up Ansible on Ubuntu 16.04

Configuration management systems are designed to make controlling large numbers of servers easy for administrators and operations teams. They allow you to control many different systems in an automated way from one central location.

While there are many popular configuration management systems available for Linux systems, such as Chef and Puppet, these are often more complex than many people want or need. Ansible is a great alternative to these options because it has a much smaller overhead to get started.

This guide will show you how to install and use Ansible on Ubuntu 16.04.


How Does Ansible Work?
Ansible works by configuring client machines from a computer with the Ansible components installed and configured.

It communicates over normal SSH channels in order to retrieve information from remote machines, issue commands, and copy files. Because of this, an Ansible system does not require any additional software to be installed on the client computers.

This is one way that Ansible simplifies the administration of servers. Any server that has an SSH port exposed can be brought under Ansible's configuration umbrella, regardless of what stage it is at in its life cycle.

Any computer that you can administer through SSH, you can also administer through Ansible.

Ansible takes a modular approach, making it easy to extend the functionality of the main system to deal with specific scenarios. Modules can be written in any language and communicate in standard JSON.

Configuration files are mainly written in the YAML data serialization format due to its expressive nature and its similarity to popular markup languages. Ansible can interact with clients through either command line tools or through its configuration scripts called Playbooks.
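
For context, a Playbook is just a YAML file describing the desired state of a set of hosts. A minimal sketch, targeting a hypothetical "servers" group like the one defined later in this guide, might look like this:

---
- hosts: servers
  become: true
  tasks:
    - name: Ensure Nginx is installed
      apt:
        name: nginx
        state: present

Saved as, say, nginx.yml, it would be run with:

    ansible-playbook nginx.yml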


Prerequisites
To follow this guide, you will need one Ubuntu 16.04 server with a non-root sudo user and SSH keys.


Installing Ansible
To begin exploring Ansible as a means of managing our various servers, we need to install the Ansible software on at least one machine. We will be using an Ubuntu 16.04 server for this section.

The best way to get Ansible for Ubuntu is to add the project's PPA (personal package archive) to your system. We can add the Ansible PPA by typing the following command:

    sudo apt-add-repository ppa:ansible/ansible

Press ENTER to accept the PPA addition.

Next, we need to refresh our system's package index so that it is aware of the packages available in the PPA. Afterwards, we can install the software:

    sudo apt-get update
    sudo apt-get install ansible

As we mentioned above, Ansible primarily communicates with client computers through SSH. While it certainly has the ability to handle password-based SSH authentication, SSH keys help keep things simple. You can follow the tutorial linked in the prerequisites to set up SSH keys if you haven't already.

We now have all of the software required to administer our servers through Ansible.


Configuring Ansible Hosts
Ansible keeps track of all of the servers that it knows about through a "hosts" file. We need to set up this file first before we can begin to communicate with our other computers.

Open the file with root privileges like this:

    sudo nano /etc/ansible/hosts

You will see a file that has a lot of example configurations, none of which will actually work for us since these hosts are made up. So to start, let's comment out all of the lines in this file by adding a "#" before each line.

We will keep these examples in the file to help us with configuration if we want to implement more complex scenarios in the future.

Once all of the lines are commented out, we can begin adding our actual hosts.

The hosts file is fairly flexible and can be configured in a few different ways. The syntax we are going to use though looks something like this:

[group_name]
alias ansible_ssh_host=your_server_ip

The group_name is an organizational tag that lets you refer to any servers listed under it with one word. The alias is just a name to refer to that server.

So in our scenario, we are imagining that we have three servers we are going to control with Ansible. These servers are accessible from the Ansible server by typing:

    ssh root@your_server_ip

You should not be prompted for a password if you have set this up correctly. We will assume that our servers' IP addresses are 192.0.2.1, 192.0.2.2, and 192.0.2.3. We will set this up so that we can refer to these individually as host1, host2, and host3, or as a group as servers.

This is the block that we should add to our hosts file to accomplish this:

[servers]
host1 ansible_ssh_host=192.0.2.1
host2 ansible_ssh_host=192.0.2.2
host3 ansible_ssh_host=192.0.2.3

Hosts can be in multiple groups and groups can configure parameters for all of their members. Let's try this out now.

With our current settings, if we tried to connect to any of these hosts with Ansible, the command would fail (assuming you are not operating as the root user). This is because your SSH key is installed for the root user on the remote systems, while Ansible will by default try to connect as your current user. A connection attempt will produce this error:

host1 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh.",
    "unreachable": true
}

On the Ansible server, we're using a user called demo. Ansible will try to connect to each host with ssh demo@server. This will not work if the demo user is not on the remote system.

We can create a file that tells all of the servers in the "servers" group to connect using the root user.

To do this, we will create a directory in the Ansible configuration structure called group_vars. Within this folder, we can create YAML-formatted files for each group we want to configure:

    sudo mkdir /etc/ansible/group_vars
    sudo nano /etc/ansible/group_vars/servers

We can put our configuration in here. YAML files start with "---", so make sure you don't forget that part.

---
ansible_ssh_user: root

Save and close this file when you are finished.

If you want to specify configuration details for every server, regardless of group association, you can put those details in a file at /etc/ansible/group_vars/all. Individual hosts can be configured by creating files under a directory at /etc/ansible/host_vars.
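
For example, a per-host file could override the connection settings for just one machine; the "deploy" user and port 2200 below are purely illustrative values:

    sudo nano /etc/ansible/host_vars/host3

---
ansible_ssh_user: deploy
ansible_ssh_port: 2200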


Using Simple Ansible Commands
Now that we have our hosts set up and enough configuration details to allow us to successfully connect to our hosts, we can try out our very first command.

Ping all of the servers you configured by typing:

    ansible -m ping all

host1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

host3 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

host2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

This is a basic test to make sure that Ansible has a connection to all of its hosts.

The "all" means all hosts. We could just as easily specify a group:

    ansible -m ping servers

We could also specify an individual host:

    ansible -m ping host1

We can specify multiple hosts by separating them with colons:

    ansible -m ping host1:host2

The -m ping portion of the command is an instruction to Ansible to use the "ping" module. These are basically commands that you can run on your remote hosts. The ping module operates in many ways like the normal ping utility in Linux, but instead it checks for Ansible connectivity.

The ping module doesn't really take any arguments, but we can try another command to see how that works. We pass arguments to a module by using the -a flag.

The "shell" module lets us send a terminal command to the remote host and retrieve the results. For instance, to find out the memory usage on our host1 machine, we could use:

    ansible -m shell -a 'free -m' host1

host1 | SUCCESS | rc=0 >>
             total       used       free     shared    buffers     cached
Mem:          3954        227       3726          0         14         93
-/+ buffers/cache:        119       3834
Swap:            0          0          0


Wrapping up
By now, you should have your Ansible server configured to communicate with the servers that you would like to control. We have verified that Ansible can communicate with each host and we have used the ansible command to execute simple tasks remotely.
Share:

How To Set Up SSH Servers, Clients, and Key-Pair

SSH is a secure protocol used as the primary means of connecting to Linux servers remotely. It provides a text-based interface by spawning a remote shell. After connecting, all commands you type in your local terminal are sent to the remote server and executed there.

This can be used as a quick reference when you need to know how to connect to or configure your server in different ways.
Share:

How To Set Up Docker on Ubuntu 16.04

Docker is an application that makes it simple and easy to run application processes in a container, which are like virtual machines, only more portable, more resource-friendly, and more dependent on the host operating system.

There are two methods for installing Docker on Ubuntu 16.04. One method involves installing it on an existing installation of the operating system. The other involves spinning up a server with a tool called Docker Machine that auto-installs Docker on it.

This guide will show you how to install and use Docker on Ubuntu 16.04.
Share:

How To Set Up LEMP Stack using Software Collections Repository on CentOS/RHEL 7


This tutorial will walk you through the steps to set up a LEMP stack on CentOS/RHEL 7.
Share:

Introducing Windows Defender System Guard Runtime Attestation


At Microsoft, we want users to be in control of their devices, including knowing the security health of these devices. If important security features should fail, users should be aware. Windows Defender System Guard runtime attestation, a new Windows platform security technology, fills this need.
Share:

How To Deploy a Firewall using a Free Open Source IPFire

IPFire is a hardened, versatile, state-of-the-art Open Source firewall based on Linux. This tutorial will show you how to deploy a firewall using free and open source IPFire.

Share:

How To Connect to Wi-Fi using Command-line in Ubuntu/CentOS/RHEL

There are several tools for managing a wireless network interface on Linux but my favorite one is nmcli, a command-line tool used to create, show, edit, delete, enable, and disable network connections, as well as control and display network device status.

This tutorial will show you how to connect to and manage Wi-Fi network interfaces using the nmcli tool on any Linux distribution.
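
As a quick preview, a typical nmcli session for joining a wireless network looks like the following; the SSID and passphrase are placeholders:

    # Make sure the radio is on and see which networks are in range
    nmcli radio wifi on
    nmcli device wifi list

    # Connect to a network (this creates and activates a connection profile)
    nmcli device wifi connect "MyNetwork" password "MyPassphrase"

    # Review saved connections and device status
    nmcli connection show
    nmcli device status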
Share:

How to Set Up SSH Password-less Authentication on Ubuntu 16.04

SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with an Ubuntu server, chances are you will spend most of your time in a terminal session connected to your server through SSH.

This tutorial will show you how to set up password-less authentication using SSH key-pair on Ubuntu 16.04.
Share:

How To Set Up SSH Password-less Authentication on CentOS/RHEL 7

SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with a CentOS server, chances are, you will spend most of your time in a terminal session connected to your server through SSH.

This guide will show you how to generate SSH key-pair to set up password-less authentication on CentOS/RHEL 7.
Share:

How To Improve Your Website Response using WebP Images

For this guide, we'll use the command-line tool cwebp to convert images into WebP format, creating scripts that will watch and convert images in a specific directory. Lastly, we'll show you two ways to serve WebP images to your visitors.
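
The conversion itself is a one-liner per image; the quality factor of 80 below is just a reasonable starting point, not a required value:

    # Convert a JPEG to WebP at quality 80
    cwebp -q 80 image.jpg -o image.webp

    # Batch-convert every PNG in the current directory
    for f in *.png; do cwebp -q 80 "$f" -o "${f%.png}.webp"; done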
Share:

Microsoft Releases Windows Admin Center Tool


Microsoft has officially released Windows Admin Center, a free tool that promises to streamline how administrators manage their Windows Server and Windows 10 systems.
Share:

How To Block Unwanted SSH Login Attempts using PyFilter on Ubuntu 16.04

This guide will walk you through the steps to set up PyFilter on Ubuntu 16.04.

SSH is a cryptographic network protocol for operating network services securely. It's typically used for remote control of a computer system or for transferring files. When SSH is exposed to the public internet, it becomes a security concern. For example, you'll find bots attempting to guess your password via brute force methods.
Share:

Microsoft Announces Windows Server 17093 Preview

Windows Server preview build 17093 and Project Honolulu 1802 are now available to Windows Server Insiders, members of Microsoft's early-access and feedback program.
Share:

How To Perform I/O Troubleshooting using Condusiv I/O Assessment Tool

Condusiv offers a free tool that allows you to examine overall I/O performance and ranks systems in red, yellow, green to quickly identify which systems suffer from I/O issues and how badly – and which don’t.
Share:

How To Place a Workstation Out of Order Remotely using PowerShell

This guide will show you how to put a workstation out of order remotely using Active Directory and PowerShell.

If you work in an environment that has public computing sites, the workstations deployed to those sites tend to develop issues quickly because of high foot traffic. While many IT departments use some sort of signage to deter patrons from using problematic workstations, patrons often ignore those signs. Thus, it makes sense to put a workstation out of order remotely, which ensures patrons are not using an unstable workstation.
Share:

A New Research Shows Mosquitoes Can Be Trained Not to Bite Specific Humans

Mosquitoes transmit dangerous diseases, killing more people than any other animal and prompting the recent War on Mosquitoes. A new study shows that a method for training mosquitoes not to bite specific humans could be as effective as using insect repellents like DEET.
Share:

How to Install Swift and Vapor on Ubuntu 16.04

Swift is a programming language from Apple. It's fast, safe, and modern, and it has an enormous community backing the language. Swift is used primarily for developing iOS and macOS applications, but as of Swift 3, you can use it for server-side application development as well.

Vapor is a popular server-side Swift web framework. Like Swift, Vapor is fast and modern, and it supports many of the features that you'll see in web frameworks for other programming languages.

This guide will show you how to install Swift and Vapor on Ubuntu 16.04.
Share:

How To Implement an Effective Alerting and Monitoring Strategy

Monitoring systems help increase visibility into your infrastructure and applications and define acceptable ranges of performance and reliability. By understanding which components to measure and the most appropriate metrics to focus on for different scenarios, you can begin to plan a monitoring strategy that covers all critical parts of your services.

In this article, we will discuss the components that make up a monitoring system and how to use them to implement your monitoring strategy. We will begin by reviewing the basic responsibilities of an effective, reliable monitoring system. Afterwards, we will cover how the elements of a monitoring system fulfill those functional requirements. Then, we'll talk about how best to translate your monitoring policies into dashboards and alert policies that provide your team with the information they need without requesting their attention at unwarranted times.


Parts of a Monitoring System
Monitoring systems are composed of a few different components and interfaces that all work together to collect, visualize, and report on the health of your deployment. We will cover the basic individual parts below.


Distributed Monitoring Agents and Data Exporters
While the bulk of the monitoring system might be deployed to a dedicated server or servers, data needs to be gathered from many different sources throughout your infrastructure. To do this, a monitoring agent—a small application designed to collect and forward data to a collection endpoint—is installed on each individual machine throughout the network. These agents gather statistics and usage metrics from the host where they are installed and send them to the central monitoring software.

Agents run as always-on daemons on each host throughout the system. They may include a basic configuration to authenticate securely with the remote data endpoint, define the data frequency or sampling policies, and set unique identifiers for the hosts' data. To reduce the impact on other services, the agent must use minimal resources and be able to operate with little to no management. Ideally, it should be trivial to install an agent on a new node and begin sending metrics to the central monitoring system.

Monitoring agents typically collect generic, host-level metrics, but agents to monitor software like web or database servers are available as well. For most specialized types of software, however, data will have to be collected and exported by either modifying the software itself, or building your own agent by creating a service that parses the software's status endpoints or log entries. Many popular monitoring solutions have libraries available to make it easier to add custom instrumentation to your services. As with agent software, care must be taken to ensure that your custom solutions minimize their footprint to avoid impacting the health or performance of your applications.

So far, we've made some assumptions about a push-based architecture for monitoring, where the agents push data to a central location. However, pull-based designs are also available. In pull-based monitoring systems, individual hosts are responsible for gathering, aggregating, and serving metrics in a known format at an accessible endpoint. The monitoring server polls the metrics endpoint on each host to gather the metrics data. The software that collects and presents the data through the endpoint has many of the same requirements as an agent, but often requires less configuration since it does not need to know how to access other machines.


Metrics Ingress
One of the busiest parts of a monitoring system at any given time is the metrics ingress component. Because data is constantly being generated, the collection process needs to be robust enough to handle a high volume of activity and coordinate with the storage layer to correctly record the incoming data.

For push-based systems, the metrics ingress endpoint is a central location on the network where each monitoring agent or stats aggregator sends its collected data. The endpoint should be able to authenticate and receive data from a large number of hosts simultaneously. Ingress endpoints for metrics systems are often load balanced or distributed at scale both for reliability and to keep up with high volumes of traffic.

For pull-based systems, the corresponding component is the polling mechanism that reaches out and parses the metrics endpoints exposed on individual hosts. This has some of the same requirements, but some responsibilities are reversed. For instance, if individual hosts implement authentication, the metrics gathering process must be able to provide the correct credentials to log in and access the secure endpoint.


Data Management Layer
The data management layer is responsible for organizing and recording incoming data from the metrics ingress component and responding to queries and data requests from the administrative layers. Metrics data is usually recorded in a format called a time series which represents changes in value over time. Time series databases—databases that specialize in storing and querying this type of data—are frequently used within monitoring systems.

The data management layer's primary responsibility is to store incoming data as it is received or collected from hosts. At a minimum, the storage layer should record the metric being reported, the value observed, the time the value was generated, and the host that produced it.

For persistence over longer periods of time, the storage layer needs to provide a way to export data when the collection exceeds the local limitations for processing, memory, or storage. As a result, the storage layer also needs to be able to import data in bulk to re-ingest historic data into the system when necessary.

The data management layer also needs to provide organized access to the stored information. For systems using time series databases, this functionality is provided by built-in querying languages or APIs. These can be used for interactive querying and data exploration, but the primary consumers will likely be the data presentation dashboards and the alert system.


Visualization and Dashboard Layer
Built on top of the data management layer are the interfaces that you interact with to understand the data being collected. Since metrics are time series data, they are best represented as graphs with time on the x-axis. This way, you can easily understand how values change over time. Metrics can be visualized over various time scales to understand trends over long periods of time as well as recent changes that might be affecting your systems currently.

The visualization and data management layers are both involved in ensuring that data from various hosts or from different parts of your application stack can be overlaid and viewed holistically. Luckily, time series data provides a consistent scale which helps identify events or changes that happened concurrently, even when the impact is spread across different types of infrastructure. Being able to select which data to overlay interactively allows operators to construct visualizations most useful for the task at hand.

Commonly used graphs and data are often organized into saved dashboards. These are useful in a number of contexts, either as a continual representation of current health metrics for always-on displays, or as focused portals for troubleshooting or deep diving into specific areas of your system. For instance, a dashboard with a detailed breakdown of physical storage capacity throughout a fleet can be important when capacity planning, but might not need to be referenced for daily administration. Making it easy to construct both generalized and focused dashboards can help make your data more accessible and actionable.


Alerting and Threshold Functionality
While graphs and dashboards will be your go-to tools for understanding the data in your system, they are only useful in contexts where a human operator is viewing the page. One of the most important responsibilities of a monitoring system is to relieve team members from actively watching your systems so that they can pursue more valuable activities. To make this feasible, the system must be able to ask for your attention when necessary so that you can be confident you will be made aware of important changes. Monitoring systems use user-defined metric thresholds and alert systems to accomplish this.

The goal of the alert system is to reliably notify operators when data indicates an important change and to leave them alone otherwise. Since this requires the system to know what you consider to be a significant event, you must define your alerting criteria. Alert definitions are composed of a notification method and a metric threshold that the system continuously evaluates based on incoming data. The threshold usually defines a maximum or minimum average value for a metric over a specified time frame while the notification method describes how to send out the alert.

One of the most difficult parts of alerting is finding a balance that allows you to be responsive to issues while not over alerting. To accomplish this, you need to understand which metrics are the best indications of real problems, which problems require immediate attention, and what notification methods are best suited for different scenarios. To support this, the threshold definition language must be powerful enough to adequately describe your criteria. Similarly, the notification component must offer methods of communicating appropriate for various levels of severity.
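
What such a definition looks like depends entirely on your monitoring software. As one illustrative sketch, a Prometheus-style alerting rule (assuming Prometheus and Alertmanager, which this article does not otherwise cover, and a hypothetical latency metric) pairs a threshold expression with labels and context for the notification:

groups:
  - name: latency
    rules:
      - alert: HighRequestLatency
        # Fire only if the 5-minute average latency stays above 0.5s for 10 minutes
        expr: avg_over_time(http_request_duration_seconds[5m]) > 0.5
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "Request latency above 500ms on {{ $labels.instance }}"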


Black-Box and White-Box Monitoring
Now that we've described how various parts of the monitoring system contribute to improving visibility into your deployment, we can talk about some of the ways that you can define thresholds and alerts to best serve your team. We'll begin by discussing the difference between black-box and white-box monitoring.

Black-box and white-box monitoring describe different models for monitoring. They are not mutually exclusive, so often systems use a mixture of each type to take advantage of their unique strengths.

Black-box monitoring describes an alert definition or graph based only on externally visible factors. This style of monitoring takes an outside perspective to maintain a focus on the public behavior of your application or service. With no special knowledge of the health of the underlying components, black-box monitoring provides you with data about the functionality of your system from a user perspective. While this view might seem restrictive, this information maps closely to issues that are actively affecting customers, so they are good candidates for alert triggers.

The alternative, white-box monitoring, is also incredibly useful. White-box monitoring describes any monitoring based on privileged, inside information about your infrastructure. Because a system's internal processes vastly outnumber its externally visible behaviors, you will likely have a much higher proportion of white-box data. And since it operates with more comprehensive information about your systems, white-box monitoring has the opportunity to be predictive. For instance, by tracking changes in resource use, it can notify you when you may need to scale certain services to meet new demand.

Black-box and white-box are merely ways of categorizing different types of perspectives into your system. Having access to white-box data, where the internals of your system are visible, is helpful in investigating issues, assessing root causes, and finding correlated factors when an issue is known or for normal administration purposes. Black-box monitoring, on the other hand, helps detect severe issues quickly by immediately demonstrating user impact.


Matching Severity with Alert Type
Alerting and notifications are some of the most important parts of your monitoring system to get right. Without notifications about important changes, your team will either not be aware of events impacting your systems or will need to actively monitor your dashboards to stay informed. On the other hand, overly aggressive messaging with a high percentage of false positives, non-urgent events, or ambiguous messaging can do more harm than good.

In this section, we'll talk about different tiers of notifications and how to best use each to maximize their effectiveness. Afterwards, we'll discuss some criteria for choosing what to alert on and what the notification should accomplish.


Pages
Starting with the highest priority alert type, pages are notifications that attempt to urgently call attention to a critical issue with the system. This category of alert should be used for situations that demand immediate resolution due to their severity. A reliable, aggressive way of reaching out to people with the responsibility and power to work on resolving the problem is required for the paging system.

Pages should be reserved for critical issues with your system. Because of the type of issues they represent, they are the most important alerts your system sends. Good paging systems are reliable, persistent, and aggressive enough that they cannot be reasonably ignored. To ensure a response, paging systems often include an option to notify a secondary person or group if the first page is not acknowledged within a certain amount of time.

Because pages are, by nature, incredibly disruptive, they should be used sparingly: only when it is clear that there is an operationally unacceptable problem. Often, this means that pages are tied to observed symptoms in your system using black-box techniques. While it might be difficult to determine the impact of a backend web host maxing out connections, the significance of your domain being unreachable is much less ambiguous and might demand a page.


Secondary Notifications
Stepping down in severity are notifications like emails and tickets. These are designed to leave a persistent reminder that operators should investigate a developing situation when they are in a good position to do so. Unlike pages, notification-style alerts are not meant to indicate that immediate action is required, so they are typically handled by working staff rather than alerting an on-call employee. If your business does not have administrators working at all times, notifications should be aligned to situations that can wait until the next working day.

Tickets and emails generated by monitoring help teams understand the work they should be focusing on when they're next active. Because notifications should not be used for critical issues currently affecting production, they are frequently based on white-box indicators that can predict or identify evolving issues that will need to be resolved soon.

Other times, notification alerts are set to monitor the same behavior as paging alerts, but set to lower, less critical thresholds. For instance, you might define a notification alert when your application is showing a small increase in latency over a period of time and have a corresponding page sent when the latency grows to an unreasonable amount.

In general, notifications are most appropriate in situations that require a response, but don't pose an immediate threat to the stability of your system. In these cases, you want to bring awareness to an issue so that your team can investigate and mitigate before it impacts users or transforms to a larger problem.


Logging Information
While not technically an alert, sometimes you may wish to note specific observed behavior in a place you can easily access later without bringing it to anyone's attention immediately. In these situations, setting up thresholds that will simply log information can be useful. These can be written to a file or used to increment a counter on a dashboard within your monitoring system. The goal is to provide readily compiled information for investigative purposes to cut down on the number of queries operators must construct to gather information.

This strategy only makes sense for scenarios that are very low priority and need no response on their own. Their largest utility is correlating related factors and summarizing point-in-time data that can be referenced later as supplemental sources. You will probably not have many triggers of this type, but they might be useful in cases where you find yourself looking up the same data each time an issue comes up. Alternatives that provide some of the same benefits are saved queries and custom investigative dashboards.


When To Avoid Alerting
It's important to be clear on what alerts should indicate to your team. Each alert should signify that a problem is occurring that requires manual human action or input on a decision. Because of this focus, as you consider metrics to alert on, note any opportunities where reactions could be automated.

Automated remediation can be designed in cases where:

A recognizable signature can reliably identify the problem
The response will always be the same
The response does not require any human input or decision making

Some responses are simpler to automate than others, but generally, any scenario that fits the above criteria can be scripted away. The response can still be tied to alert thresholds, but instead of sending a message to a person, the trigger can kick off the scripted remediation to solve the problem. Logging each time this occurs can provide valuable information about your system health and the effectiveness of your metric thresholds and automated measures.
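
A minimal sketch of such a scripted response, assuming a systemd-managed nginx service and a log file location of your choosing, might look like this; in a real deployment it would be triggered by the alerting system rather than run ad hoc:

    #!/bin/bash
    # restart-nginx.sh: restart nginx if it stops responding, and log the action
    if ! curl -sf http://localhost/ > /dev/null; then
        systemctl restart nginx
        echo "$(date -u) nginx unresponsive, restarted" >> /var/log/auto-remediation.log
    fi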

It's important to keep in mind that automated processes can experience problems as well. It is a good idea to add extra alerting to your scripted responses so that an operator is notified when automation fails. This way, a hands-off response will handle the majority of cases and your team will be notified of incidents that require intervention.


Designing Effective Thresholds and Alerts
Now that we've covered the different alert mediums available and some of the scenarios that are appropriate for each, we can talk about the characteristics of good alerts.


Triggered by Events with Real User Impact
As mentioned previously, alerts based on scenarios with real user impact are best. This means analyzing different failure or performance degrading scenarios and understanding how and when they may bubble up to layers that users interact with.

This requires a good understanding of your infrastructure redundancy, the relationship of different components, and your organization's goals for availability and performance. Your aim is to discover the symptomatic metrics that can reliably indicate present or impending user-impacting problems.


Thresholds with Graduated Severity
After you've identified symptomatic metrics, the next challenge is identifying the appropriate values to use as thresholds. You might have to use trial and error to discover the right thresholds for some metrics.

If available, check historic values to determine what scenarios required remediation in the past. For each metric, it's good to define an "emergency" threshold that will trigger a page and one or several "canary" thresholds that are associated with lower priority messaging. After defining new alerts, ask for feedback on whether the thresholds were overly aggressive or not sensitive enough so that you can fine tune the system to best align to your team's expectations.


Contain Appropriate Context
Minimizing the time it takes for responders to begin investigating issues helps you recover from incidents faster. To this end, it is useful to try to provide context within the alert text so operators can understand the situation quickly and start working on appropriate next steps.

Alerts should clearly indicate the components and systems affected, the metric threshold that was triggered, and the time that the incident began. The alert should also provide links that can be used to get further information. These may be links to specific dashboards associated with the triggered metric, links to your ticketing system if automated tickets were generated, or links to your monitoring system's alerts page where more detailed context is available.

The goal is to give the operator enough information to guide their initial response and help them focus on the incident at hand. Providing every piece of information you have about the event is neither required nor recommended, but giving basic details with a few options for where to go next can shorten the initial discovery phase of your response.


Sent to the Right People
Alerts are not useful if they are not actionable. Often, whether an alert is actionable depends on the level of knowledge, experience, and permission that the responding individual has. For organizations of a certain size, deciding on the appropriate person or group to message is straightforward in some cases and ambiguous in others. Developing an on-call rotation for different teams and designing a concrete escalation plan can remove some of the ambiguity in these decisions.

The on-call rotations should include enough capable individuals to avoid burnout and alert fatigue. It is best if your alerting system includes a mechanism for scheduling on-call shifts, but if not, you can develop procedures to manually rotate the alert contacts based on your schedules. You may have multiple on-call rotations populated by the owners of specific parts of your systems.

An escalation plan is a second tool to make sure incidents go to the correct people. If you have staff covering your systems 24 hours a day, it is best to send alerts generated from the monitoring system to on-shift employees rather than the on-call rotation. The responders can then perform mitigation themselves or decide to manually page on-call operators if they need additional help or expertise. Having a plan that outlines when and how issues are escalated can minimize unnecessary alerts and preserve the sense of urgency that pages are meant to represent.


Conclusion
In this article, we've discussed how monitoring and alerting work in real systems. We began by looking at how the different parts of a monitoring system work to fulfill organizational needs for awareness and responsiveness. We discussed the difference between black- and white-box monitoring as a framework for thinking about different alerting cues. Afterwards, we discussed different types of alerts and how best to match incident severity with an appropriate alert medium. Lastly, we covered the characteristics of an effective alert process to help you design a system that increases your team's responsiveness.
Share:

How to Protect Your Computers From Spectre, Meltdown Vulnerabilities


For some users, protecting systems against the potential security threats from two processor design vulnerabilities will be straightforward; for others, it will be more complicated.

The first thing you have to know regarding the two processor vulnerabilities affecting Intel and other makers is that there are currently no exploits out there in the malware world. This means that if you can’t find a fix for the Spectre or Meltdown vulnerabilities for your organization’s computers, you don’t have to panic—yet.

But that doesn’t mean you shouldn’t start working on a permanent solution to the problem, because it’s very real, and eventually it’s likely that someone, somewhere, will find a way to use the vulnerabilities to hack into something. 

Both of the vulnerabilities are present in Intel chips and have been since 1995. However, it would be wrong to consider either one a bug or a design flaw, because Intel used the features behind the vulnerabilities to enhance performance.

Meltdown is based on support for memory sharing between the kernel and an application. Spectre is based on speculative execution, a technique in which the processor assumes what the next CPU instruction will be and begins executing it.

Researchers at Google Project Zero found that some extremely subtle timing differences in how a processor was executing instructions could provide insight into memory. Likewise, kernel memory sharing allowed some leakage of memory contents. Both of these could potentially be used by malware creators to gather protected information.

There are three potential pathways for malware to gain system access. The most serious are through a browser and through the computer’s operating system. Closing off those pathways requires OS vendors and browser developers to make changes to protect against these attacks.

Microsoft has already released updates for Windows 10 that protect against both vulnerabilities. Updates for other Microsoft operating systems, including Windows Server and Windows 7, will be sent out on Jan. 9, the normal Patch Tuesday. Updates for some versions of Linux are already available, with other versions to follow soon. Apple has said that its macOS and iOS devices are vulnerable and that the company will be releasing updates soon, although an exact date is not available.

Browser developers are already starting to send out updates. Firefox has already been updated; Microsoft has sent out updates for its Edge and Internet Explorer browsers. Google has said it will update the Chrome browser soon. 

The other pathway is through the processor itself. This requires microcode updates by reflashing the processor or by reflashing the computer’s BIOS as a way to bypass the problem. But when it comes to updating your hardware, you may find yourself in Update Hell. 

This is because you have to depend on the maker of the computer to provide the firmware updates required and whether you can get an update easily—or at all—depends on what company made your computer or server. 

I investigated updates to computers and servers from three vendors: Dell, Hewlett Packard, and Lenovo. Where possible, I attempted to perform the necessary updates by downloading and flashing the relevant firmware or the BIOS.

Lenovo made it easy. The company provides an update engine that’s included with its products—even old ones—that will find and download the files needed for the update. Then it will ask you when it’s OK to install them. The process is automated and fast. 

I don’t have an operational Dell machine in my office right now, but a search revealed Dell’s support pages for its client PCs and servers. These allow users to search for their specific computer models and then follow a link to download the updated firmware. While I didn’t try the updates for Dell’s full line of servers, there didn’t seem to be restrictions on what you can download.

The situation is different with HP. First, the company has divided itself into two parts, HP and HPE (Hewlett Packard Enterprise). Servers and other enterprise hardware are handled by HPE while consumer and business computers such as laptops, desktops and workstations are handled by HP. 

Getting firmware updates from HP is fairly easy, but the company does not appear to have released any updates for these vulnerabilities. Some of the firmware downloads available on HP’s business computer site haven’t been updated for years. 

At HPE the firmware updates may be available, but unless you have a machine that’s under warranty or you have been paying HPE for a maintenance contract, you’re out of luck. The way you can tell is that when you go to the download page for HPE servers, you’ll see the words “entitlement required,” which means that if you can’t prove you’ve been paying for support, you don’t get the update.

What makes things worse is even though HPE indicates that you may be able to pay a license fee for the update, there’s no apparent means of doing so and customer service personnel aren’t able to help. So if you have equipment from HPE, you’re on your own with one less than convenient recourse, which is to find another server vendor. 

You should note that not every computer with every processor is going to receive updates immediately. While Intel has released updates to the manufacturers, it’s up to them to turn that into a readily-accessible package you can use to flash your firmware and microcode. You can expect newer hardware to be available first. You need to keep checking and hope you get lucky.


Current Mitigation Patch Status
Linux distributions have started to distribute patches, but no distributions are yet fully patched.

Distributions that have released kernel updates with partial mitigation include:

CentOS 7: kernel 3.10.0-693.11.6
CentOS 6: kernel 2.6.32-696.18.7
Fedora 27: kernel 4.14.11-300
Fedora 26: kernel 4.14.11-200
Ubuntu 17.10: kernel 4.13.0-25-generic
Ubuntu 16.04: kernel 4.4.0-108-generic
Ubuntu 14.04: kernel 3.13.0-139-generic
Debian 9: kernel 4.9.0-5-amd64
Debian 8: kernel 3.16.0-5-amd64
Debian 7: kernel 3.2.0-5-amd64
Fedora 27 Atomic: kernel 4.14.11-300.fc27.x86_64
CoreOS: kernel 4.14.11-coreos

If your kernel is updated to at least the version corresponding to the above, some updates have been applied.

Operating systems that have not yet released kernels with mitigation include:

FreeBSD 11.x
FreeBSD 10.x

Ubuntu 17.04, which is reaching end of life on January 13, 2018, will not receive patches. Users are strongly encouraged to update or migrate.

Warning: We strongly recommend that you update or migrate off of any release that has reached end of life. These releases do not receive critical security updates for vulnerabilities like Meltdown and Spectre, which can put your systems and users at risk.

Because of the severity of this vulnerability, we recommend applying updates as they become available instead of waiting for a full patch set. This may require you to upgrade the kernel and reboot more than once in the coming days and weeks.

To update your servers, you need to update your system software once patches are available for your distribution. You can update by running your regular package manager to download the latest kernel version and then rebooting your server to switch over to the patched code.

For Ubuntu and Debian servers, you can update your system software by refreshing your local package index and then upgrading your system software:

sudo apt-get update
sudo apt-get dist-upgrade

For CentOS servers, you can download and install updated software by typing:

sudo yum update

For Fedora servers, use the dnf tool instead:

sudo dnf update

Regardless of the operating system, once the updates are applied, reboot your server to switch to the new kernel:

sudo reboot

Once the server is back online, log in and check the active kernel against the list above to ensure that your kernel has been upgraded. Check for new updates frequently to ensure that you receive further patches as they become available.
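
You can confirm the running kernel version with uname; the output should match or exceed the version listed for your distribution above:

    uname -r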


Conclusion

Spectre and Meltdown represent serious security vulnerabilities; the full extent of their possible impact is still becoming clear.

To protect yourself, be vigilant in updating your operating system software as patches are released by vendors and continue to monitor communications related to the Meltdown and Spectre vulnerabilities.
Share:

How To Set Up Content Management System (CMS) using OctoberCMS on Ubuntu 16.04

This guide will show you how to install OctoberCMS on an Ubuntu 16.04 server with Apache as the web server and MariaDB as the database.
Share: