Tuesday, September 24, 2019

10 Ansible modules you need to know to automate everyday tasks

https://opensource.com/article/19/9/must-know-ansible-modules?utm_medium=Email&utm_campaign=weekly&sc_cid=7013a000002CxUyAAK

Ansible is an open source IT configuration management and automation platform. It uses human-readable YAML templates so users can program repetitive tasks to happen automatically without having to learn an advanced programming language. <<< More >>>

Monday, September 2, 2019

What is an API?

API stands for Application Programming Interface. An API is a software intermediary that allows two applications to talk to each other.  In other words, an API is the messenger that delivers your request to the provider that you’re requesting it from and then delivers the response back to you.
An API defines functionalities that are independent of their respective implementations, which allows those implementations and definitions to vary without compromising each other. Therefore, a good API makes it easier to develop a program by providing the building blocks.
When developers create code, they don't often start from scratch. APIs enable developers to make repetitive yet complex processes highly reusable with a little bit of code. The speed at which APIs let developers build out apps is crucial to the current pace of application development.
Developers are now much more productive than they were when they had to write a lot of code from scratch. With an API they don't have to reinvent the wheel every time they write a new program. Instead, they can focus on the unique proposition of their applications while outsourcing all of the commodity functionality to APIs.
The principle of API abstraction enables speed and agility
One of the chief advantages of APIs is that they allow the abstraction of functionality between one system and another. An API endpoint decouples the consuming application from the infrastructure that provides a service. As long as the specification for what the service provider delivers to the endpoint remains unchanged, alterations to the infrastructure behind the endpoint should not be noticed by the applications that rely on that API.
Therefore, the service provider is given a great deal of flexibility when it comes to how its services are offered. For example, if the infrastructure behind the API involves physical servers at a data center, the service provider can easily switch to virtual servers that run in the cloud.
If the software running on those servers (such as credit card processing software) is written in, say, Java running on an Oracle-based Java application server, the service provider can migrate that to Node.js (server-side JavaScript) running on Windows Azure.
The ability of API-led connectivity to let systems change as easily as plugging a plug into a socket is key to the modern vision of enterprise IT. Gone are the days of messy point-to-point integrations for connecting enterprise solutions, which take time and resources to maintain.
How do APIs work?
Imagine a waiter in a restaurant.  You, the customer, are sitting at the table with a menu of choices to order from, and the kitchen is the provider who will fulfill your order.
You need a link to communicate your order to the kitchen and then to deliver your food back to your table. It can’t be the chef because she’s cooking in the kitchen. You need something to connect the customer who’s ordering food and the chef who prepares it.  That’s where the waiter — or the API —  enters the picture.
The waiter takes your order and delivers it to the kitchen, telling the kitchen what to do. The waiter then delivers the response, in this case the food, back to you. Moreover, if the API is designed correctly, your order (hopefully) won't crash!
A real example of an API
How are APIs used in the real world? Here’s a very common scenario of the API economy at work: booking a flight.
When you search for flights online, you have a menu of options to choose from. You choose a departure city and date, a return city and date, cabin class, and other variables like your meal, your seat, or baggage requests.
To book your flight, you need to interact with the airline’s website to access the airline’s database to see if any seats are available on those dates, and what the cost might be based on the date, flight time, route popularity, etc.
You need access to that information from the airline’s database, whether you’re interacting with it from the website or an online travel service that aggregates information from multiple airlines. Alternatively, you might be accessing the information from a mobile phone. In any case, you need to get the information, and so the application must interact with the airline’s API, giving it access to the airline’s data.
The API is the interface that, like your helpful waiter, delivers the data from the application you're using to the airline's systems over the Internet. It then takes the airline's response to your request and delivers it right back to the travel application you're using. Moreover, through each step of the process, it facilitates the interaction between the application and the airline's systems – from seat selection to payment and booking.
APIs do the same for all interactions between applications, data, and devices. They allow the transmission of data from system to system, creating connectivity. APIs provide a standard way of accessing any application data, or device, whether it’s accessing cloud applications like Salesforce, or shopping from your mobile phone.
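To make the request-and-response flow concrete, here is a minimal sketch of what a travel application's call to an airline's web API might look like in Python. The endpoint URL, query parameters, and response fields are hypothetical, and the third-party requests library is assumed to be available.

    import requests

    # Hypothetical flight-search endpoint; real airline APIs differ in URL,
    # parameters, and authentication requirements.
    SEARCH_URL = "https://api.example-airline.com/v1/flights"

    params = {
        "origin": "BOS",       # departure airport
        "destination": "SFO",  # arrival airport
        "date": "2019-10-01",  # departure date
        "cabin": "economy",    # cabin class
    }

    # The API is the contract: the application only needs the endpoint and the
    # request/response format, not any knowledge of the airline's backend.
    response = requests.get(SEARCH_URL, params=params, timeout=10)
    response.raise_for_status()

    for flight in response.json().get("flights", []):
        print(flight["flight_number"], flight["departure_time"], flight["price"])

Whether the client is a website, a mobile app, or a travel aggregator, the same endpoint and data format serve them all.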
Types of APIs
There are numerous types of APIs. For example, you may have heard of Java APIs, or interfaces within classes that let objects talk to each other in the Java programming language. Along with program-centric APIs, there are also Web APIs such as the Simple Object Access Protocol (SOAP), Remote Procedure Call (RPC), and perhaps the most popular—at least in name—Representational State Transfer (REST). There are 15,000 publicly available APIs, according to Programmable Web, and many thousands more private APIs that companies use to expand their internal and external capabilities.

Tuesday, July 30, 2019

Importance of Learning Math


A programmer's regret: neglecting math at university

Math matters both more and less than you think…
Yes, you can ignore math and be a highly paid professional programmer. Programming is a wide enough field that you can choose which areas you want to focus on – some of which do not require math – and still be successful. On the other hand, mathematics is the tool used to solve specialized problems, and programming is doing mathematics. more >>>

Thursday, July 4, 2019

Excerpts From a Conversation on Learning and the Future of Education


Following are excerpts of a conversation between Bari Weiss, Op-Ed staff editor and writer at The New York Times, and the New York Times best-selling author Yuval Noah Harari.

From 47:23 to 1:02:36
BW: "A young person comes to you, about to enter university, What do you tell them to study and how do you tell them to spend their time?"

YNH: "I would first of all say that nobody has any idea how the job market would look like in 2050. Anybody who tells you that they know how the job market will be and what kind of skills will be needed. They are probably either deluded or mistaken whatever, so just start with the understanding that it is unknown most probably you will have to inventing yourself repeatedly throughout your career, not just the idea of job for life, but the idea of professional for life, this is outdated. If you want to stay in the game, you will have to reinvent yourself repeatedly and you don't know what kind of skills you actually need. So the best investment is to invest in emotional intelligence, and mental resilience, or mental balance, because may be the most difficult problems will actually be psychological."

BW: "Anxiety and Stress".

YNH: "It's so difficult to reinvent yourself, to learn new skills. It's difficult when you are 20. It's much much more difficult when you are 40, and to think that you have to do it again, learn everything when you are 50, and again when you are 60, because you have a longer life span and longer careers. Emotional intelligence and mental stability and n=mental balance, I think will be the most important assets. The problem is, it's the most difficult thing to teach or to study. You can't read a book about emotional intelligence and say ok, now I know and most teachers they themselves are the product of the old system which emphasized particular skills and not this ability to constantly learn and reinvent yorurself, and keep your mental balance. So we don't have a lot of teachers who are able to teach these things.

BW: "But do you think that humanities and the classics have a role to play in that they are concerned with the big questions about the meaning of life and how to live a good life or are those now irrelevant?"

YNH: "As I said in the beginning they are more relevant than ever before, in many practical ways, because a lot of questions are going to migrate from department of philosophy to department of engineering and department of economics and questions like what do you really want to do with your life, are going to be far more practical than ever before, given the emmense powers that tehnology is giving us and the ability to change yoru body, to change your brain, is going to put enormous philosophical challenges in front of everage person. You need to make the kind of decisions, that for most of the history or the stuff of the thought experiments by the philosophers. What would you do if you could be this and if you could be that. For most of us, you couldn't. It's impossible. Why do you care about it? But in 20 years, 50 years, maybe you can. So in this sense, philosophy and humanity in general are, maybe more important than ever before. "

Question from the audience: "You talked about the importance of needing to reinvent yourself on multiple occasions in the future, and that much of what is learned in school and college will now largely be irrelevant. Given the rise of nano degrees and coding camps and schooling opportunities that are short and targeted at specific jobs, people still seem to require a four-year degree in this country, and when politicians talk about the education system, they are talking about making it free or not free. But I am wondering if you see any signs that the four-year college degree is changing, or are we stuck with that and whatever is going to be layered on top of it? It seems to be a lot of money and a lot of time to invest in something that will not really last you as long as it used to."

YNH: "I think the entire education system is facing a huge crisis and it's really the first system that faces this growing crisis, because it needs to comfort the future, when you think about what to teach today in school or college, you have to think in terms of 2040 and we don't have the answers so if you talk to experts in educational fields, almost all of them will tell you that the system is becoming almost irrelevant but what can  replace it, we just don't know. There are many experiments being done, they work, some of them quite well in small scale, but it's very difficult to scale it up. from the level of the experimental school to the elvel of an entire system with millions of teachers and tens of millions of students, I definitely don't have the answer. I don't think anybody at present have the answer. One of the problems is that we already have a system we don't start from scratch the inertia of the system is immense. You have all these buildings, you have all these teachers, you have all these bureaucrats. It's an immense system. I think this is the tip of the iceberg. Here we are encountering for the first tie this shock of the future world. It's too early to expect to have the answers. We hardly began the debate. in my impression is that the educational system to be relevant will have to switch from focusing on information and skills, and move to the direction on things like emotional intelligence or mental balance or learning how to learn and not learning a particular skill.

Monday, May 27, 2019

Glossary - Cybersecurity


Accessibility -

Breach - An incident that results in the confirmed disclosure—not just potential exposure—of data to an unauthorized party.

CIA - The confidentiality, integrity, and availability triad: the three core objectives of information security.

Confidentiality - The property that information is not made available or disclosed to unauthorized individuals, entities, or processes.

Honeypot - A honeypot is a computer security mechanism set to detect, deflect, or, in some manner, counteract attempts at unauthorized use of information systems. Generally, a honeypot consists of data (for example, in a network site) that appears to be a legitimate part of the site, but is actually isolated and monitored, and that seems to contain information or a resource of value to attackers, who are then blocked. 

Incident - A security event that compromises the integrity, confidentiality or availability of an information asset.

Integrity - The property that information and systems are accurate and complete and have not been modified in an unauthorized manner.

Risk - The potential for loss or harm when a threat exploits a vulnerability, typically assessed as a combination of likelihood and impact.

Threat - A potential cause of an unwanted incident that may result in harm to a system or organization.

Vulnerability - A weakness in a system, process, or control that can be exploited by a threat.




Saturday, May 4, 2019

Data Breaches and Privacy

Hackers and Hacking

Cybersecurity Every Day


Digital Attack Map

Network Security 101


Insider Threat


Insider Threat Field Guide

Insider Threat: Real-World Lessons Learned

Insider Threat Cases (union employee sharing credentials)

Inside Insider Threats (human side and technology)

A Narrated Insider Threat Story

Insider Threat Program

Friday, April 12, 2019

Can Technology (Blockchain) Solve the Social Problem?



trust, complex systems, cooperation, defection, Prisoner's Dilemma, morals, reputation, institutional pressure, security systems

Sunday, February 17, 2019

List of Information Security Vulnerabilities

The Big List of Information Security Vulnerabilities

        posted June 27, 2016

Information security vulnerabilities are weaknesses that expose an organization to risk. Understanding your vulnerabilities is the first step to managing risk.

Employees

1. Social interaction 
2. Customer interaction 
3. Discussing work in public locations 
4. Taking data out of the office (paper, mobile phones, laptops) 
5. Emailing documents and data 
6. Mailing and faxing documents 
7. Installing unauthorized software and apps 
8. Removing or disabling security tools 
9. Letting unauthorized persons into the office (tailgating) 
10. Opening spam emails 
11. Connecting personal devices to company networks 
12. Writing down passwords and sensitive data 
13. Losing security devices such as ID cards 
14. Lack of information security awareness 
15. Keying data 

Former Employees

1. Former employees working for competitors 
2. Former employees retaining company data 
3. Former employees discussing company matters 

Technology

1. Social networking 
2. File sharing 
3. Rapid technological changes 
4. Legacy systems 
5. Storing data on mobile devices such as mobile phones 
6. Internet browsers 

Hardware

1. Susceptibility to dust, heat and humidity 
2. Hardware design flaws 
3. Out of date hardware 
4. Misconfiguration of hardware 

Software

1. Insufficient testing 
2. Lack of audit trail 
3. Software bugs and design faults 
4. Unchecked user input 
5. Software that fails to consider human factors 
6. Software complexity (bloatware) 
7. Software as a service (relinquishing control of data) 
8. Software vendors that go out of business or change ownership 

Network

1. Unprotected network communications 
2. Open physical connections, IPs and ports 
3. Insecure network architecture 
4. Unused user ids 
5. Excessive privileges 
6. Unnecessary jobs and scripts executing 
7. Wi-Fi networks 

IT Management

1. Insufficient IT capacity 
2. Missed security patches 
3. Insufficient incident and problem management 
4. Configuration errors and missed security notices 
5. System operation errors 
6. Lack of regular audits 
7. Improper waste disposal 
8. Insufficient change management 
9. Business process flaws 
10. Inadequate business rules 
11. Inadequate business controls 
12. Processes that fail to consider human factors 
13. Overconfidence in security audits 
14. Lack of risk analysis 
15. Rapid business change 
16. Inadequate continuity planning 
17. Lax recruiting processes 

Partners and Suppliers

1. Disruption of telecom services 
2. Disruption of utility services such as electric, gas, water 
3. Hardware failure 
4. Software failure 
5. Lost mail and courier packages 
6. Supply disruptions 
7. Sharing confidential data with partners and suppliers 

Customers

1. Customer access to secure areas 
2. Customer access to data (i.e., customer portal) 

Offices and Data Centers

1. Sites that are prone to natural disasters such as earthquakes 
2. Locations that are politically unstable 
3. Locations subject to government spying 
4. Unreliable power sources 
5. High crime areas 
6. Multiple sites in the same geographical location


How a Music & a Biology major became a Security Hacker

Monday, February 11, 2019

Public Key Cryptography



PKI for busy people

Public-key infrastructure (PKI) is an umbrella term for everything that has to do with certificate and key management.
This is a quick overview of the important stuff.

Public-key cryptography

Public-key cryptography involves a key pair: a public key and a private key. Each entity has their own. The public key can be shared around, the private key is secret.
The keys allow two things:
  • Encrypt a message with the public key, decrypt it with the private key
  • Sign a message with the private key, verify it with the public key
Some common algorithms are RSA (used for both) and ECDSA (only for signatures).
In practice, public-key cryptography can be slow. That’s why nearly all protocols (such as TLS or SSH) only use it for authentication. Much faster symmetric-key algorithms (such as AES) are then used for encryption. This requires a shared secret, which is usually agreed upon using some flavor of Diffie-Hellman.
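As a rough sketch (not from the original post), here is what the first use, encrypting with the public key and decrypting with the private key, looks like using the third-party Python cryptography package; the RSA key size and OAEP padding are typical choices, not requirements.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Generate a key pair: the private key stays secret, the public key is shared.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Anyone holding the public key can encrypt...
    ciphertext = public_key.encrypt(b"a short secret", oaep)

    # ...but only the private-key holder can decrypt.
    assert private_key.decrypt(ciphertext, oaep) == b"a short secret"

In a real protocol this asymmetric step typically protects only a small handshake secret, with the bulk data encrypted symmetrically as described above.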

Hashing

Hashing algorithms (such as SHA-256) are one-way functions that take any input and compute a fixed-size output, called a hash (or sometimes a digest). A good hash function makes it practically impossible to find two different inputs that produce the same hash.
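For instance, with Python's standard hashlib module (SHA-256 shown as one example):

    import hashlib

    # The same input always produces the same fixed-size digest.
    print(hashlib.sha256(b"hello world").hexdigest())   # 64 hex chars = 256 bits

    # A tiny change to the input produces a completely different digest.
    print(hashlib.sha256(b"hello world!").hexdigest())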

Signatures

Signatures authenticate messages. Here’s a rough simplification:
  • To sign a message, a code (the “signature”) is calculated using the message and a private key
  • Using the public key and the original message, anyone can then verify the signature was indeed calculated from the message using the corresponding private key
Signing the whole message is pretty inefficient, so its hash is signed instead. That’s why you’ll see signature algorithms with descriptions like “ECDSA Signature with SHA-256.”
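Here is a minimal sign-and-verify sketch with the Python cryptography package, using "ECDSA with SHA-256" as in the description above; the library hashes the message internally as part of the signature algorithm.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    message = b"a message worth authenticating"

    # The signer holds the private key.
    private_key = ec.generate_private_key(ec.SECP256R1())
    signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

    # Anyone with the public key can verify the signature.
    public_key = private_key.public_key()
    try:
        public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
        print("signature is valid")
    except InvalidSignature:
        print("signature does not match the message")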

Certificates

A certificate is a name and public key bound by a signature. It identifies the owner of a public key.
The signer is called a certificate authority (CA). The CA is often some big company, like GeoTrust or Let’s Encrypt. With internal PKI, it can be any entity that nodes have been configured to trust.
A CA’s certificate can be signed by another CA, and so on. The last certificate in the chain is called a root certificate. Root certificates are trusted and stored locally. They’re usually shipped along with browsers and the OS.

Formats

Most often when people talk about certificates, they refer to X.509. It’s a flexible format for representing certificates. X.509 is used by TLS, which is used by a lot of things, like HTTPS and Kubernetes.
X.509 certificates are written in ASN.1 notation. ASN.1 is usually serialized into DER. Since binary data can be a pain to transmit, it’s often further encoded into PEM. PEM is essentially just Base64-encoded DER.
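As an illustration of these formats, the Python cryptography package can parse a PEM-encoded certificate and re-serialize it as DER; the file name cert.pem is just a placeholder.

    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    # "cert.pem" is a placeholder for any PEM-encoded X.509 certificate.
    with open("cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    print("subject:", cert.subject.rfc4514_string())
    print("issuer:", cert.issuer.rfc4514_string())
    print("expires:", cert.not_valid_after)

    # PEM is just Base64-wrapped DER; here is the same certificate as raw DER.
    der_bytes = cert.public_bytes(serialization.Encoding.DER)
    print("DER length:", len(der_bytes), "bytes")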

Verification

Certificate verification consists of making sure the certificate chain is valid and leads to a trusted root certificate.
Of course, it assumes we trust the CAs, safe in the knowledge that they conform to sane security practices and only issue certificates to verified entities.
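In everyday code this chain check usually happens inside the TLS library. A small sketch with Python's standard ssl module, which validates the server's chain against the locally trusted roots during the handshake (example.com is just an example host):

    import socket
    import ssl

    hostname = "example.com"  # any HTTPS host will do

    # create_default_context() loads the trusted root certificates and enables
    # chain validation plus host-name checking.
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            # Reaching this point means the certificate chain verified successfully.
            cert = tls.getpeercert()
            print("subject:", cert["subject"])
            print("issuer:", cert["issuer"])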

Bundling

Since verification requires the complete chain, certificates are often distributed as a bundle. In the case of TLS, the chain is sent during the handshake.
Usually PEM files are just concatenated into one.
Certificates can also be bundled using PKCS #12 (also known as PFX) or PKCS #7. The main difference is PKCS #12 can store private keys.
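Concatenating PEM files really is just appending them; a sketch with placeholder file names:

    # Build a bundle from the leaf certificate plus its intermediate(s).
    # The file names are placeholders for your own certificate files.
    with open("fullchain.pem", "wb") as bundle:
        for name in ("leaf.pem", "intermediate.pem"):
            with open(name, "rb") as part:
                bundle.write(part.read())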

Issuance

When applying for a certificate:
  1. The client sends a certificate signing request (CSR) to the CA. It includes the client’s public key and a bunch of distinguished name attributes (such as country and domain name)
  2. If everything looks good, the CA generates a certificate from the CSR
In the simplest case, the CA just performs Domain Validation (DV). It’s usually fast and automated, like checking for some specific DNS record.
For more thorough vetting, there’s also Organization Validation (OV) and Extended Validation (EV). OV implies DV and verifying ownership of the legal entity. EV is the slowest and most rigorous of all, based on CA/Browser Forum guidelines. EV certificates are usually displayed prominently (for example, on Safari the URL will be green).
For internal PKI, you can do whatever works best. With Kubernetes, you might send certificates to the nodes manually, or automate client CSRs and signing.
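A minimal sketch of step 1 using the Python cryptography package: generate a key pair and build a CSR for a hypothetical domain, which would then be submitted to the CA (or to your internal signing process).

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # The applicant generates a key pair; the private key never leaves the client.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Build a CSR carrying the public key and distinguished-name attributes.
    # "example.internal" is a hypothetical domain name.
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COUNTRY_NAME, "US"),
            x509.NameAttribute(NameOID.COMMON_NAME, "example.internal"),
        ]))
        .sign(key, hashes.SHA256())
    )

    # PEM-encode the CSR for submission to the CA.
    print(csr.public_bytes(serialization.Encoding.PEM).decode())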

Revocation

There are basically two ways to revoke certificates: certificate revocation lists (CRLs) and OCSP. A CRL is just a big list of certificates revoked by the CA. OCSP is a protocol that allows inquiring about a specific certificate.
Both have their flaws. They add overhead. A lot of software doesn’t care. It might be easier to just use short-lived certificates and make issuance super smooth and simple.

Summary

  • With someone’s public key, we can verify their signatures and send them encrypted messages
  • With our private key, we can sign messages and decrypt messages sent to us
  • Certificates identify public key owners
  • We trust a certificate because we trust the CA that signed it
  • We trust the CA because whoever manages our trust store (Apple, Google, Microsoft, the server administrator, etc.) trusts them and has added the CA’s certificate to it
February 11, 2019