
Container Engine For Kubernetes (OKE) In Oracle Cloud (OCI)


A Kubernetes cluster is used to deploy containerized applications on the cloud. One of the ways to deploy a containerized application as a Kubernetes cluster on OCI is Container Engine for Kubernetes (OKE).

Kubernetes Adoption in Cloud:

  • AWS – Elastic Kubernetes Service (EKS)
  • Microsoft Azure – Azure Kubernetes Service (AKS)
  • Google Cloud Platform – Google Kubernetes Engine (GKE)
  • Oracle Cloud – Oracle Kubernetes Engine (OKE)
  • Digital Ocean – Digital Ocean Kubernetes Service (DOKS)

Key Concepts Of Kubernetes

1) Kubernetes cluster: A group of nodes, where nodes are the machines that run the applications. A node can be a physical machine or a virtual machine.

2) Container: A container is a runtime instance of a Docker image, which combines three things: the Docker image, an execution environment, and a standard set of instructions. There are different container technologies, such as Linux Containers (LXC), rkt, Mesos containers, and Docker.

3) Types of nodes and their processes

  • Master Node:
    • kube-apiserver: supports API operations via the Kubernetes command-line tool (kubectl)
    • kube-controller-manager: manages Kubernetes components (e.g., the replication controller, endpoints controller, and namespace controller)
    • kube-scheduler: decides where in the cluster to run jobs
    • etcd: stores the cluster's configuration data
  • Worker Node:
    • kubelet: communicates with the master node
    • kube-proxy: handles networking


4) Pods: A worker node runs various containers; Kubernetes groups containers into a single logical unit called a pod. Pods specify the processes running in the cluster. Pods with similar functions can be grouped together into a service.


5) Manifest Files/Pod Specs: Files in JSON or YAML format that specify how to deploy applications on one or more nodes.
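
For example, a minimal pod manifest (a sketch; the pod name and image are illustrative) can be saved as pod.yaml and deployed with kubectl apply -f pod.yaml:

  apiVersion: v1
  kind: Pod
  metadata:
    name: hello-pod            # illustrative pod name
  spec:
    containers:
      - name: hello            # illustrative container name
        image: nginx:latest    # any container image works here
        ports:
          - containerPort: 80  # port the container listens on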

6) Node pools: Node pools let you create pools of machines within a cluster that have different configurations, for example one node pool of virtual machines and another of bare-metal machines. A cluster must have at least one node pool, but a node pool need not contain any worker nodes.

To read more about Container Engine for Kubernetes click here.

Ways To Launch Kubernetes On Oracle

There are basically three methods to run Kubernetes on OCI:

  1. Roll-your-own container management: Use OCI components to create a Kubernetes cluster yourself and deploy a container runtime and orchestrator such as Docker, Kubernetes, or Mesos (the do-it-yourself model).
  2. Quickstart Experience: An automated model that uses Terraform to build the components of a Kubernetes cluster (the Terraform installer is on GitHub).
  3. Container Engine for Kubernetes: OKE is the managed service in OCI used to deploy a Kubernetes cluster within a few steps.


Container Engine for Kubernetes is a highly available, managed service in Oracle Cloud Infrastructure (OCI). It is used to deploy cloud-native applications in OCI. We can build an application as Docker containers and then deploy them on OCI using Kubernetes, which groups the application containers into logical units called pods. We can manage Container Engine for Kubernetes using the API or the Console.

By default, we can create three clusters (monthly flex pricing model) or one cluster (pay-as-you-go pricing model). Each cluster can have a maximum of 1,000 nodes, and a maximum of 110 pods can run on each node.

Why Use Kubernetes


  • Traditional deployment was costly because each application needed its own physical server; if we deployed multiple applications on one server, there was the problem of allocating resources fairly to each application.
  • Virtualized deployment solves this problem: we can host multiple virtual machines on a single physical server, which isolates the applications deployed on each virtual machine and provides better utilization of resources.
  • Container deployment is similar to VMs, but containers share the OS among applications and are therefore considered lightweight. Containers are a good way to package and run your applications. For example, if a container goes down, another container should take over, and this process should be handled by a system; that is what Kubernetes does.

Access Kubernetes On Oracle Cloud

1) Creating a Kubernetes Cluster: We can create a Kubernetes cluster in OCI using various methods:

  • Using console (quick cluster)
  • Custom setting in console (custom cluster)
  • Using API

To know about the steps to create the Kubernetes cluster click here.

2) Modify: We can modify node details of an already created cluster.

3) Delete Cluster: We can delete a cluster when it is no longer needed, along with its master node, worker nodes, and node pools.

4) Monitoring Cluster: We can monitor the cluster and its associated nodes for overall status. The cluster status can be Creating, Active, Failed, Deleting, Deleted, or Updating.

5) Accessing Cluster: We can use the kubectl command-line tool to access and perform operations on the cluster.
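
For instance, once kubectl is configured with your cluster's kubeconfig, a few basic commands (node names will vary per cluster) let you inspect it:

  kubectl get nodes                   # list worker nodes and their status
  kubectl get pods --all-namespaces   # list all pods running in the cluster
  kubectl describe node <node-name>   # show details of a specific node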

Benefits Of Using Oracle Kubernetes (OKE)

  1. Easy to build and maintain applications, and economical.
  2. Easy to integrate Kubernetes with a container registry using OKE.
  3. Feasible for developers to deploy and manage containerized applications on the cloud.
  4. Combines the open-source container orchestration of Kubernetes with Oracle services such as access control, IAM, and security.

Reference/Related Post

Join FREE Masterclass

To know about the roles and responsibilities of a Kubernetes administrator, why you should learn Docker and Kubernetes, job opportunities for Kubernetes administrators in the market, and what to study (including hands-on labs you must perform) to clear the CKA certification exam, register for our FREE Masterclass.

Click on the below image to register for our FREE Masterclass now!

The post Container Engine For Kubernetes (OKE) In Oracle Cloud (OCI) appeared first on Oracle Trainings.


[AZ-301] Microsoft Azure Architect Design Certification Exam: Everything You Need To Know


This blog-post will cover everything you need to know about the [AZ-301] Microsoft Azure Architect Design certification exam.

You probably have many questions: Why is this certification important? What domains does it cover? What are the eligibility criteria? How should you prepare for it? And so on.

What Is Azure Architect Design Certification?

The AZ-301 Microsoft Azure certification is geared towards those who advise stakeholders and translate business requirements into secure, scalable, and reliable solutions.

Why Should You Learn Azure?

  • Validates technical skills such as storage, networking, compute, security, and other cloud operations on Microsoft Azure.
  • Among the top-paying IT certifications in the world.
  • Provides global recognition for your knowledge, skills, and experience.
  • Organizations look for professionals who know Oracle Cloud, AWS, Azure, etc.

Why Azure Certification Is Beneficial?

  • As demand for Azure increases, the need for Azure administrators is rapidly increasing along with it. A CV with this gleaming certification will therefore have an enormous advantage.
  • In terms of job prospects and earnings, certification leads to a significant increase in both.
  • Almost 70% of people agree that certification has improved their earnings, and 84% have seen better job prospects after getting certified.
  • Updating your LinkedIn profile with this certificate will boost your job profile and increase your chances of getting shortlisted.

Exam Details (AZ-301)

  • Certification Name: [AZ-301] Microsoft Azure Architect Design
  • Prerequisites: There are no prerequisites for taking this exam. Microsoft recommends that candidates have a minimum of six months of hands-on experience administering Azure.
  • Exam Cost: USD 165.00


AZ-301 Exam Topics

Candidates should have advanced experience and knowledge across various aspects of IT operations, including networking, virtualization, identity, security, business continuity, disaster recovery, data management, budgeting, and governance. This role requires managing how decisions in each area affect the overall solution. Candidates must be proficient in Azure administration, Azure development, and DevOps, and have expert-level skills in at least one of those domains.


The important domains covered in the [AZ-301] Microsoft Azure Architect Design certification exam are:

  • Determine workload requirements (10-15%)
  • Design for identity and security (20-25%)
  • Design a data platform solution (15-20%)
  • Design a business continuity strategy (15-20%)
  • Design for deployment, migration, and integration (10-15%)
  • Design an infrastructure strategy (15-20%)

Exam Retake Policy

  • If candidates fail the exam on their first attempt, they have to wait 24 hours before reapplying for the exam.
  • If the second attempt also fails, they should reassess their preparation and retake the exam after a waiting period of 14 days.
  • Finally, a candidate is allowed a maximum of 5 attempts in a year.

Who Is This Certification For?

Anyone looking to gain the Microsoft Certified: Azure Solutions Architect Expert Certification needs to complete this [AZ-301] Microsoft Azure Architect Design certification exam and the [AZ-300] Microsoft Azure Architect Technologies certification exam.


Next Task For You

Interested in other Microsoft Azure Certifications as well? Check out this blog post to know all about the [AZ-104] Microsoft Azure Certification Exam. Also, check out this blog to know everything about the [AZ-900] Azure Fundamentals Certification Exam.

Click on the Join Waitlist now button below to join the waitlist of our much-awaited AZ-300 Certification Training, which will help you clear the exam with flying colors.


The post [AZ-301] Microsoft Azure Architect Design Certification Exam: Everything You Need To Know appeared first on Oracle Trainings.

Integration b/w Oracle Engagement Cloud, Oracle CPQ & Oracle EBS


We have been getting a lot of queries from our Oracle Integration Cloud (OIC) trainees about the use case of creating an integration so that an opportunity can be converted into a quote and then into an order.

In this blog, I am going to show the step-by-step process of using Oracle Integration Cloud (OIC) to integrate Oracle Sales Cloud, Oracle CPQ (Configure, Price, Quote), and Oracle E-Business Suite so that data is synchronized in real time.

Use Case

1. An opportunity comes in through Oracle Engagement Cloud and is sent to Oracle CPQ Cloud.
2. The opportunity is received and converted into a quote in Oracle CPQ Cloud, then sent to Oracle E-Business Suite.
3. The quote is converted into an order in Oracle E-Business Suite.
4. When the integration is complete, the data should be synchronized in real time between Oracle Engagement Cloud (Sales Cloud), Oracle CPQ (Configure, Price, Quote), and Oracle E-Business Suite.

The systems need to work from one set of data and transfer it between each other automatically. This requires the systems to be integrated so that data can flow between them bidirectionally.

For this scenario, we are choosing Oracle Integration Cloud (OIC), as it offers bidirectional data synchronization between Oracle Engagement Cloud (Sales Cloud), Oracle CPQ Cloud, and Oracle E-Business Suite.

What is Oracle Integration Cloud (OIC)?

Oracle Integration Cloud (OIC) is a cloud-based integration application designed to build integrations between SaaS applications, and between SaaS and on-premises applications in either direction.
To learn more, check our beginner blog: Oracle Integration Cloud (OIC) For Beginners Overview


Why choose OIC?

  • OIC offers pre-built adapters and integrations; this reduces the effort and time required to create an integration.
  • OIC allows integrations to be created without any code, XML schema, XSLT, or other artifacts.
  • OIC allows real-time monitoring of all integrations with key performance metrics.
  • OIC provides error management and lets you troubleshoot error messages from one place.

What is Oracle Configure, Price, and Quote (CPQ)?

Oracle's Configure, Price, and Quote (CPQ) provides a cloud-based system that offers extreme ease of use and configurability. It automates the sales order process and allows sales personnel to configure and price complex products and promotions.

The Oracle CPQ Cloud platform helps businesses accurately configure and quote their services and products and effectively engage with customers. It has helped companies reduce average quote processing time from 5 to 7 days to less than one hour, additionally increasing revenue and business.

What is Oracle Engagement Cloud?

Engagement Cloud (formerly known as Oracle Sales Cloud) combines sales and service capabilities in one solution with a unique combination of sales automation, service request management, knowledge management, and digital customer service.

 

What is Oracle E-Business Suite (EBS)?

Oracle E-Business Suite EBS ERP

Oracle EBS is an integrated set of business applications that users can implement in their own businesses.

It includes Oracle CRM, Oracle Financials, Oracle Human Resource Management System (HRMS), Oracle Logistics, Oracle Supply Chain Applications, Oracle Order Management, Oracle Transportation Management, and Oracle Warehouse Management System. Learn more about Oracle EBS

Achieving the Use Case with OIC

Step 1: Create an Integration to Get Opportunity Details from Engagement Cloud to the CPQ Cloud.

  1. For the integration, we choose the prebuilt integration Oracle Sales Cloud to Oracle CPQ Cloud Integration | OIC Recipe so that the opportunity details can flow from Oracle Engagement Cloud to CPQ Cloud.
  2. Complete the connections for Oracle CPQ and Sales Cloud by providing the appropriate connection details.
  3. The prebuilt integration Oracle Sales Cloud to Oracle CPQ Cloud Integration provides 4 integrations.
  4. In the Opportunity Import integration, the CPQ and Sales Cloud connections are set as the source and target connections.
  5. Now check the mapping, provide the business identifier in the tracking field, and save the integration.
  6. Activate the Opportunity Import and Quote Upsert integrations.

Step 2: Creating a New Integration to Enable the Creation of Sales Order in Oracle EBS.

  1. Create an integration between Oracle CPQ and Oracle EBS for converting quotes into sales orders.
  2. Select the CPQ Cloud connection as the source and provide the name, service definition method, and service details.
  3. Select the EBS connection as the target and provide the name, product family, package, and procedure.
  4. Create the data mapping between these two endpoints.
  5. You can also use the recommended mappings to automatically map the required source and target fields.
  6. Provide the business identifier in the tracking field and save the integration.
  7. Activate the integration.
  8. Now messages can flow from Oracle CPQ to create a sales order in EBS when an opportunity is closed.

Congratulations, you have successfully created the integration, and the data is now synchronized between Oracle Sales Cloud, Oracle CPQ (Configure, Price, Quote), and Oracle E-Business Suite in real time.

Note: If you want to enhance your knowledge and become a certified Oracle Cloud Platform Application Integration 2019 Associate, then check out the blog on the certification exam [1Z0-1042].

Related/References

Next Task For you

Begin your journey towards becoming an Oracle [1Z0-1042] Certified Cloud Integration Expert by joining our FREE Masterclass.

Click on the below image to register for the FREE Masterclass now!

The post Integration b/w Oracle Engagement Cloud, Oracle CPQ & Oracle EBS appeared first on Oracle Trainings.

Docker Architecture | Docker Engine Components | Container Lifecycle


This post is the second video of our five-part video series on “Docker & Kubernetes”.

In this video blog, we are covering the Architecture & Components of Docker and Container lifecycle.

Note: If you have missed my previous post on "Docker vs Virtual Machine", check it out here: https://k21academy.com/docker12

<Video>

Docker Architecture & Components

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which builds, runs, and distributes the Docker containers. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.


There are five major components in the Docker architecture:

a) Docker Daemon listens to Docker API requests and manages Docker objects such as images, containers, networks, and volumes.

b) Docker Clients: With the help of Docker clients, users can interact with Docker. The Docker client provides a command-line interface (CLI) that allows users to issue run and stop commands to a Docker daemon.

c) Docker Host provides a complete environment to execute and run applications. It comprises the Docker daemon, images, containers, networks, and storage.

d) Docker Registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to use images from Docker Hub by default. You can also run your own private registry.

e) Docker Images are read-only templates that you build from a set of instructions written in a Dockerfile. Images define both what you want your packaged application and its dependencies to look like and what processes to run when the container is launched.

Note: To know more about Docker image in detail click here

Resources Isolation In Container (Docker)

a) Namespaces provide a layer of isolation: a namespace limits what you can see. When we run a container, Docker creates a set of namespaces for that container. There are different types of namespaces: pid, net, mnt, uts, and ipc.

b) Control groups (cgroups) limit an application to a specific set of resources: a cgroup limits how many resources you can use. This allows Docker Engine to share the available hardware resources among containers and optionally enforce limits and constraints (see the example after this list).

c) Union file systems operate by creating layers; a Docker image is made up of filesystems layered over each other, which makes it very lightweight and fast. Without UnionFS, running a 200 MB image 5 times as 5 separate containers would mean 1 GB of disk space.

Docker Engine Components

Docker Engine is the layer on which Docker runs. It is installed on the host machine. It’s a lightweight runtime and tooling that manages containers, images, builds, and more.


There are three components in the Docker Engine:

a) Server: The Docker daemon, called dockerd. It creates and manages Docker objects such as images, containers, and networks.

b) REST API: It is used to instruct the Docker daemon what to do.

c) Command Line Interface (CLI): The client used to issue Docker commands.
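
You can see all three components at work on a machine with a default Docker installation (assuming the daemon listens on the standard UNIX socket):

  docker version                                                             # CLI: reports both client and server (dockerd) versions
  curl --unix-socket /var/run/docker.sock http://localhost/containers/json   # REST API: the same data the CLI consumes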

Docker Networking & Storage


Networking in Docker is the part of Docker used to connect containers to each other and to the outside world so that they can communicate with each other and with the Docker host. You can also connect Docker containers to non-Docker workloads. Docker uses the Container Network Model (CNM) for networking.

Note: To know more about Docker networking in detail click here

Docker Storage: By default, all files created inside a container are stored on a writable container layer, so the data doesn't persist when the container no longer exists. Docker has two options for containers to store files on the host machine so that the files persist even after the container stops: volumes and bind mounts.
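
For example (names and paths are illustrative), a named volume and a bind mount look like this:

  docker volume create appdata                                  # create a named volume managed by Docker
  docker run -d -v appdata:/var/lib/data nginx                  # mount the volume into a container
  docker run -d -v "$(pwd)/site:/usr/share/nginx/html" nginx    # bind-mount a host directory instead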

Container Lifecycle


A container goes through different stages, known as the lifecycle of a container: created, running, paused, stopped, and deleted (a command sequence follows the list below).

  • The first phase is the created state. The container moves into the running state when we start it (docker run creates and starts a container in one step).
  • We can stop or pause a container using the docker stop or docker pause commands, and to bring a stopped container back to the running state, we use the docker start command.
  • We can delete a stopped container using the docker rm command (add -f to force-remove a running one).
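
Putting the lifecycle together as a command sequence (the container name web is illustrative):

  docker create --name web nginx   # created state
  docker start web                 # running
  docker pause web                 # paused
  docker unpause web               # running again
  docker stop web                  # stopped
  docker rm web                    # deleted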

Related Post

Join FREE Masterclass

To know about the roles and responsibilities of a Kubernetes administrator, why you should learn Docker and Kubernetes, job opportunities for Kubernetes administrators in the market, and what to study (including hands-on labs you must perform) to clear the CKA certification exam, register for our FREE Masterclass.

Click on the below image to register for our FREE Masterclass now!

The post Docker Architecture | Docker Engine Components | Container Lifecycle appeared first on Oracle Trainings.

[AZ-900] Microsoft Azure Security Services: Security Center, Key Vault, AIP & ATP


This blog post is the thirteenth blog in the Microsoft Azure Fundamentals Certification Series (AZ-900), under Topic 3: Security Services.

If you have not gone through the previous Topic 3.2 Microsoft Azure Core Identity Services: AD & MFA, read it at https://k21academy.com/az90023.

For the full list of blogs in this series, refer to https://k21academy.com/az90011

In this blog post, we’ll cover Topic 3.3 Microsoft Azure Security Services which includes Azure Security Center, Azure Key Vault, Azure Information Protection(AIP), and Azure Advanced Threat Protection(ATP).

Microsoft Azure provides tools that are needed to enhance the network, secure services, and provide security at every level possible.

Azure Security Center

  1. Azure Security Center provides tools and services across hybrid cloud and on-premises workloads to make the cloud more secure.
  2. It is a unified infrastructure security management system.
  3. It strengthens your security posture, protects against threats by assessing workloads and raising security alerts, and helps you secure resources faster by natively integrating and auto-provisioning Azure security services.

Azure Key Vault

  1. Azure Key Vault is a cloud service that provides a secure store for secrets. A vault is a logical group of secrets.
  2. It helps you securely store classified information such as keys, passwords, certificates, and other secrets (see the CLI sketch below).
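
To get a feel for how it is used, here is a sketch with the Azure CLI (the vault name, resource group, and secret are illustrative):

  az keyvault create --name myvault123 --resource-group rg1 --location eastus          # create a vault
  az keyvault secret set --vault-name myvault123 --name DbPassword --value "S3cret!"   # store a secret
  az keyvault secret show --vault-name myvault123 --name DbPassword                    # retrieve it later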


Azure Information Protection(AIP)

  1. Azure Information Protection (AIP) helps customers classify and protect documents and emails by applying labels.
  2. Labels can be applied automatically by administrators, manually by users, or by a combination of the two.


Azure Advanced Threat Protection(ATP)

  1. Azure ATP is a security service that leverages on-premises Active Directory signals.
  2. It monitors users, entity behavior, and activities with learning-based analytics.
  3. It protects user identities and credentials stored in Active Directory.
  4. It identifies and investigates suspicious user activities and advanced attacks.
  5. It provides clear incident information on a simple timeline.


Sample Questions

Here are a few sample questions from the Microsoft Azure Fundamentals Certification Exam [AZ-900] that you should be able to solve after reading this blog.

Q 1: Which Azure service should you use to store certificates?

A. Azure Security Center
B. an Azure Storage account
C. Azure Key Vault
D. Azure Information Protection
Correct Answer: C
Explanation: Azure Key Vault securely stores classified information such as keys, passwords, and certificates.

Q 2: Your company plans to automate the deployment of servers to Azure. Your manager is concerned that you may expose administrative credentials during the deployment. You need to recommend an Azure solution that encrypts the administrative credentials during the deployment. What should you include in the recommendation?
A. Azure Key Vault
B. Azure Information Protection
C. Azure Security Center
D. Azure Multi-Factor Authentication (MFA)

Correct Answer: A

Related/References

  1. [AZ-900] Microsoft Azure Certification Fundamental Exam: Everything You Must Know
  2. Learn how to create a Free Microsoft Azure Trial Account
  3. [AZ-900] Microsoft Azure Fundamentals: Topic 1.1 Overview & Benefits 
  4. Topic 2.1 Azure Architecture: Region, Availability Zone & Geography
  5. How to Register For [AZ-900] Microsoft Azure Fundamentals Certification Exam
  6. Topic 3.1 Microsoft Azure Secure Network Connectivity: Firewall, DDOS, & NSG
  7. Topic 3.2 Microsoft Azure Core Identity Services: AD & MFA 

What’s Next?

Begin your journey towards Azure by getting [AZ-900] Microsoft Azure Fundamentals certified and earning a lot more in 2020; start by joining our FREE Masterclass.

Click on the below image to Register for the FREE MASTERCLASS Now!


The post [AZ-900] Microsoft Azure Security Services: Security Center, Key Vault, AIP & ATP appeared first on Oracle Trainings.

[AZ-900] Microsoft Azure Governance: Azure Blueprints & Azure Policy


This blog post is the fourteenth blog in the Microsoft Azure Fundamentals Certification Series (AZ-900), under Topic 3: Security Services.

If you have not gone through the previous Topic 3.3 Microsoft Azure Security Services, read it at https://k21academy.com/az90024.

For the full list of blogs in this series, refer to https://k21academy.com/az90011.

In this blog post, we’ll cover Topic 3.4 Microsoft Azure Governance which includes Azure Blueprints & Azure Policy.

Microsoft Azure provides governance features and services to implement policy-based management for all Azure services, available in the cloud and on-premises. The two most prominent services are:

  1. Azure Blueprints
  2. Azure Policy


Azure Blueprints

  1. Azure Blueprints, like architectural blueprints, define Azure resources that implement an organization's standards, patterns, and requirements.
  2. By leveraging Azure Blueprints, engineers can quickly build and deploy new environments.
  3. Azure Blueprints provides a mechanism that allows you to create and update artifacts (like policies, RBAC assignments, resource groups, and ARM templates), assign them to environments, and version them.

RBAC is Azure's role-based access control, a system that provides access management for Azure resources. Using Azure RBAC, you can segregate duties within a team and grant users only the amount of access they need to perform their roles.
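
As an illustration (an Azure CLI sketch; the user and resource group are made up), granting a narrowly scoped role looks like this:

  az role assignment create --assignee jane@contoso.com --role "Reader" --resource-group rg1   # read-only access to one resource group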


Azure Policy

  1. Azure Policy is a service that you use to create, assign, and manage policies.
  2. These policies enforce rules on resources so that those resources stay compliant with your corporate standards and service-level agreements.
  3. Policies can, for example, enforce tagging for resources and resource groups, or restrict the regions where resources may be deployed (a CLI sketch follows).
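
As a sketch of how an assignment looks with the Azure CLI (the assignment name and resource group are illustrative, and the GUID is the commonly documented ID of the built-in "Allowed locations" policy definition):

  az policy assignment create --name limit-regions \
    --policy "e56962a6-4747-49cd-b67b-bf8b01975c4c" \
    --params '{ "listOfAllowedLocations": { "value": ["eastus"] } }' \
    --resource-group rg1   # new resources in rg1 may only be created in eastus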


Sample Questions

Here are a few sample questions from the Microsoft Azure Fundamentals Certification Exam [AZ-900] that you should be able to solve after reading this blog.

Q 1: You have a resource group named RG1. You plan to create virtual networks and app services in RG1. You need to prevent the creation of virtual machines only in RG1. What should you use?
A. a lock
B. an Azure role
C. a tag
D. an Azure policy
Correct Answer: D
Explanation: A lock would block changes to every resource in RG1, not just virtual machines; an Azure policy can restrict exactly which resource types may be created.
Reference: https://docs.microsoft.com/en-us/azure/governance/policy/overview

Q 2. Your company has an Azure environment that contains resources in several regions. A company policy states that administrators must only be allowed to create additional Azure resources in a region in the country where their office is located.  You need to create the Azure resource that must be used to meet the policy requirement.  What should you create?
A. a read-only lock
B. an Azure policy
C. a management group
D. a reservation
Correct Answer: B

Related/References

  1. [AZ-900] Microsoft Azure Certification Fundamental Exam: Everything You Must Know
  2. Learn how to create a Free Microsoft Azure Trial Account
  3. [AZ-900] Microsoft Azure Fundamentals: Topic 1.1 Overview & Benefits 
  4. Topic 2.1 Azure Architecture: Region, Availability Zone & Geography
  5. How to Register For [AZ-900] Microsoft Azure Fundamentals Certification Exam
  6. Topic 3.1 Microsoft Azure Secure Network Connectivity: Firewall, DDOS, & NSG
  7. Topic 3.2 Microsoft Azure Core Identity Services: AD & MFA 
  8. Topic 3.3 Microsoft Azure Security Services: Security Center, Key Vault, AIP & ATP

What’s Next?

Begin your journey towards Azure by getting [AZ-900] Microsoft Azure Fundamentals certified and earning a lot more in 2020; start by joining our FREE Masterclass.

Click on the below image to Register for the FREE MASTERCLASS Now!


The post [AZ-900] Microsoft Azure Governance: Azure Blueprints & Azure Policy appeared first on Oracle Trainings.

[AZ-900] Microsoft Azure Monitoring & Reporting: Cloud Monitor & Service Health


This is the fifteenth blog in the Microsoft Azure Fundamentals Certification Series(AZ-900) of Topic 3: Azure Cloud Security.

If you have not gone through the previous Topic 3.4 Microsoft Azure Governance, read it here: https://k21academy.com/az90025.

For the full list of blogs in this series, refer to https://k21academy.com/az90011.

This blog covers the topic 3.5 Azure Cloud Security: Monitoring and Reporting which includes Azure Cloud Monitor & Azure Cloud Service Health.

Monitoring is one of the key aspects of any IT environment, and proper reporting of events can lead to an on-time response to any problems that may arise.

Azure Cloud Monitor

  1. Azure Monitor is an extensive solution for collecting, analyzing, and acting on the telemetry data it gathers from your cloud and on-premises environments.
  2. It also enables us to figure out how well applications are performing and identifies issues affecting them and the resources they depend on.
  3. Azure Monitor has insights from all the Azure cloud solutions, which yields a very handy collection of user activity logs, diagnostic logs, storage logs, and compute logs, making sure no activity or error goes unnoticed.
  4. A cloud environment is by nature highly distributed, meaning that at any moment there can be multiple sources generating telemetry data. Azure Monitor's ability to aggregate and process that data allows the user to create alerts for issues with potential business impact in the Azure environment.


Azure Cloud Service Health

  1. The Azure cloud provides a suite of services that keeps you informed about the health of all your cloud resources, in the form of Azure Service Health.
  2. The information provided includes things such as service-impacting events, planned maintenance, and various other changes that may affect the availability of your cloud resources.
  3. Azure Service Health is a combined suite of three smaller services:
  • Azure service health: the main module, which provides personalized information on the services and regions you use.
  • Azure status: provides information on service outages; in case of an outage on Azure's side, this can help an enterprise quickly migrate its resources to non-impacted regions temporarily, or plan its cloud deployments keeping previous outages in mind.
  • Azure resource health: provides information about the health of your individual resources, such as a specific virtual machine instance.


Sample Questions

Here is a sample question from the Microsoft Azure Fundamentals Certification Exam [AZ-900] that you should be able to solve after reading this blog.

Q1. Which of the following statements are true?

  A. From Azure Service Health, an administrator can view the health of all the services deployed to an Azure environment and all the other services available in Azure.
  B. From Azure Service Health, an administrator can create a rule to be alerted if an Azure service fails.
  C. From Azure Service Health, an administrator can prevent a service failure from affecting a specific virtual machine.

Correct Answer: A and B

Explanation: Azure Service Health only provides fault monitoring; any preventative actions must be performed from the respective service dashboards.

Q2. You have a virtual machine named VM1 that runs Windows Server 2016. VM1 is in the East US Azure region. Which Azure service should you use from the Azure portal to view service failure notifications that can affect the availability of VM1?

  A. Azure Service Fabric
  B. Azure Monitor
  C. Azure virtual machines
  D. Azure Advisor

Correct Answer: B

Related/Reference

  1. [AZ-900] Microsoft Azure Certification Fundamental Exam: Everything You Must Know
  2. Learn how to create a Free Microsoft Azure Trial Account
  3. [AZ-900] Microsoft Azure Fundamentals: Topic 1.1 Overview & Benefits 
  4. Topic 2.1 Azure Architecture: Region, Availability Zone & Geography
  5. How to Register For [AZ-900] Microsoft Azure Fundamentals Certification Exam
  6. Topic 3.1 Microsoft Azure Secure Network Connectivity: Firewall, DDOS, & NSG
  7. Topic 3.2 Microsoft Azure Core Identity Services: AD & MFA 
  8. Topic 3.3 Microsoft Azure Security Services: Security Center, Key Vault, AIP & ATP
  9. Topic 3.4 Microsoft Azure Governance: Azure Blueprints & Azure Policy

What’s Next?

Check out the official certification page and start your learning towards the Azure cloud. Get certified, and earn a lot more in 2020, by joining our FREE Masterclass.

Register for the FREE MASTERCLASS Now by clicking on the link below!


The post [AZ-900] Microsoft Azure Monitoring & Reporting: Cloud Monitor & Service Health appeared first on Oracle Trainings.

Microsoft Azure Solutions Expert: AZ-300 vs AZ-301


Planning to be a Microsoft Certified: Azure Solutions Architect Expert but confused about the certification process? In this blog, I will be covering all the differences between the two exams AZ-300: Microsoft Azure Architect Technologies and AZ-301: Microsoft Azure Architect Design along with the recommended path you should follow to crack this certification.

Starting with the most important detail about these exams: both exams need to be cleared to attain the Azure Solutions Architect Expert certification, and they are not interchangeable. The topics for the AZ-300 exam are geared more towards technical proficiency in the Azure cloud with a more hands-on approach, whereas the AZ-301 exam is geared more towards designing solutions and strategies related to various Azure services and has a more theoretical approach.

AZ-300: Architect Technologies Exam Details

  • Certification Name: [AZ-300] Microsoft Azure Architect Technologies
  • Prerequisites: There are no prerequisites for taking this exam. Microsoft recommends that candidates have a minimum of six months of hands-on experience administering Azure.
  • Exam Cost: USD 165.00

Candidates for this exam should have subject matter expertise in designing and implementing solutions that run on Microsoft Azure, including aspects like compute, network, storage, and security.

Although Microsoft does not list any prerequisites for this exam, due to the highly technical nature of the AZ-300 exam, candidates are advised to go through the [AZ-900] Microsoft Azure Fundamentals and [AZ-104] Microsoft Azure Administrator certification exams first.

AZ-300: Exam Topics


The important domains covered in the [AZ-300] Microsoft Azure Architect Technologies certification exam are:

  • Deploy and configure infrastructure (40-45%)
  • Implement workloads and security (25-30%)
  • Create and deploy apps (5-10%)
  • Implement authentication and secure data (5-10%)
  • Develop for the cloud and for Azure storage (15-20%)

As we can see, the topics for the AZ-300 exam are geared more towards technical proficiency in the Azure cloud, with a more hands-on approach, and include labs to prepare candidates for real-world scenarios.

AZ-301: Architect Design Exam Details

  • Certification Name: [AZ-301] Microsoft Azure Architect Design
  • Prerequisites: There are no prerequisites for taking this course. Candidates are recommended to have a minimum of six months of hands-on experience administering Azure.
  • Exam Cost: USD 165.00

Candidates for this exam are Azure Solution Architects who advise stakeholders and translate business requirements into secure, scalable, and reliable solutions.

Similar to AZ-300, Microsoft does not list any prerequisites for this exam. However, since it is an architecture design exam, candidates have to be familiar with the basic workings of the Azure cloud and be proficient in managing it as well; they are therefore advised to go through the [AZ-900] Microsoft Azure Fundamentals and [AZ-104] Microsoft Azure Administrator certification exams.

AZ-301: Exam Topics


The important domains covered in the [AZ-301] Microsoft Azure Architect Design certification exam are:

  • Determine workload requirements (10-15%)
  • Design for identity and security (20-25%)
  • Design a data platform solution (15-20%)
  • Design a business continuity strategy (15-20%)
  • Design for deployment, migration, and integration (10-15%)
  • Design an infrastructure strategy (15-20%)

It can be observed from the above exam topics that this exam is focused more on the designing of solutions and strategies related to various Azure services. This exam has a more theoretical approach rather than a hands-on one.

Which Exam To Go For First?

Both of these certifications are required for earning the Microsoft Certified: Azure Solutions Architect Expert badge. However, candidates are recommended to take the [AZ-300] Azure Architect Technologies certification first and then focus on the [AZ-301] Azure Architect Design certification, as the hands-on knowledge gained from AZ-300 is immensely helpful in bringing clarity to the theoretical designs of the AZ-301 exam.


References/Related

Next Task For You

Interested in other Azure Certifications as well? Check out this blog post to know all about the [AZ-104] Microsoft Azure Certification Exam. Also, check out this blog to know everything about the [AZ-900] Azure Fundamentals Certification Exam.

Click on the register now button below to register for a free masterclass of our much-awaited AZ-300 Certification Training, which will help you clear the exam with flying colors.

The post Microsoft Azure Solutions Expert: AZ-300 vs AZ-301 appeared first on Oracle Trainings.


Registry in Oracle Cloud(OCI)


OCI Registry (OCIR) is an Oracle-managed registry that simplifies development by making it easy for you, as a developer, to store, manage, and deploy container images such as Docker images securely within Oracle Cloud.

Let’s start with an overview and use cases of Registry.

Overview of OCI Registry (OCIR)

OCIR is a highly available and scalable private container registry service for storing and sharing container images within the same regions as the deployments.

It is an integrated platform offering where users can store their container images in one location. Images can be pushed and pulled with the Docker CLI, or pulled directly into a Kubernetes deployment.

These container images are managed and deployed on the Container Engine for Kubernetes (OKE).


Use cases of OCI Registry

  • OCIR can be used as a private Docker registry for internal use, pushing and pulling Docker images to and from the registry using the Docker V2 API and the standard Docker command-line interface (CLI).
  • OCIR can also be used as a public Docker registry: any user with internet access and knowledge of the appropriate URL can pull images from public repositories in OCIR.
  • Developers, testers, and CI/CD systems need a registry to store images created during the application development process. With one in place, you can iterate on code faster and push it to production more frequently.
  • Oracle Functions and Events: function code is packaged as a Docker image and pushed to OCIR, and triggers can be configured in the Events service so that the function is invoked when matching events occur.

Benefits of OCI Registry

  • Integration: Full integration with Container Engine for Kubernetes (OKE)
  • Security: Registries are private by default but can be made public by an admin
  • Regional Availability: Pull container images quickly from the same region as your deployments
  • High availability: Leverages OCI for high performance, high availability, and low latency image push and pull
  • Anywhere Access: Use Docker CLI to push and pull images from anywhere – cloud, on-premises, or laptop

Pre-requisites for OCIR

  • User must have access to an Oracle Cloud Infrastructure tenancy.
  • User must have access to the Docker CLI.
  • To use the registry service, the user must either be part of the admin group or part of a group to which a policy grants the appropriate permissions.
  • User needs an OCI username and auth token before being able to push/pull an image.

Steps to configure OCI Registry

1. Navigate to Auth Tokens under Resources and click Generate Token.


2. Copy the token created; it will be used as the password for accessing OCIR.


3. Install the Docker CLI on the local machine (Windows in my case). Follow the installation steps given here.

4. Pull the hello-world image from Docker Hub and run it in the CLI.

docker run hello-world

5. To list the images downloaded from Docker Hub, enter the following command:

  docker images


6. Log in to the registry in OCI using the Docker CLI

  • In a terminal window on the client machine running Docker, enter the following command:
    docker login <region-key>.ocir.io
  • Enter your username and password:
    Username: <tenancy-namespace>/<username>
    If the tenancy is federated with Oracle Identity Cloud Service:
    Username: <tenancy-namespace>/oracleidentitycloudservice/<username>
    Password: the auth token generated earlier (input is not displayed)

7. Locate the existing hello-world image that we downloaded in step 4, to push it to the registry

  • In a terminal window, enter the command:
    docker images
  • Tag the image that you are going to push to the registry. In our case, we are going to push the image named hello-world (a concrete example follows):
    docker tag <image-identifier> <target-tag>
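
For example, with illustrative values (region key iad, tenancy namespace mytenancy, repository hello), the tag and the push in step 8 would look like this:

  docker tag hello-world:latest iad.ocir.io/mytenancy/hello/hello-world:latest
  docker push iad.ocir.io/mytenancy/hello/hello-world:latest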


8. Push your tagged docker image from the local machine to OCI registry

   docker push <target-tag>


9. Pull the Docker image from the OCI registry to the client machine:

docker pull <region-key>.ocir.io/<tenancy-namespace>/<repo-name>/<image-name>:<tag>


Note: Docker images can be pulled with Docker CLI or directly into Oracle Container Engine for Kubernetes(OKE) Deployments. We have demonstrated using only the Docker CLI in this post.

10. By navigating to the registry created in OCI, we can see that the image has been pushed into the hello repository.


11. We can see the pull status from the OCI Console.


Related/Further Readings

Next Task For You

In our OCI Developer Associate [1Z0-1084] Certification training, we cover the OCI Registry in the Operating Cloud-Native Applications module. In this training, we also cover the fundamentals of cloud-native applications and how to develop, secure, and test them.

Click on the Join Waitlist now button below to join the waitlist of our much-awaited [1Z0-1084] Oracle Cloud Infrastructure Developer Associate Certification.


 

The post Registry in Oracle Cloud(OCI) appeared first on Oracle Trainings.

Billing and Cost Management in Oracle Cloud (OCI)


Nowadays, it is critical for organizations to keep track of billing and cost management for the various services (Compute, Database, Storage, Networking, IAM) they use in OCI.

Oracle Cloud Infrastructure provides Cost Analysis, Budgets, and Usage Reports, through which you can analyze your spending on different services, keep track of the services used, and set thresholds on your spending.

In this blog, we will discuss the suite of tools Oracle provides to help you understand spending patterns, monitor consumption, analyze your bill, and, ultimately, reduce spending.

Pricing Models

Oracle offers several purchase models to help you maximize the potential from cloud services while optimizing the cost at the same time.


  1. Pay-as-you-go (PAYG): Allows you to quickly provision services with no commitment and pay only for what you use.
  2. Universal Credits (Monthly Flex): Select a monthly prepaid commitment and consume any IaaS and PaaS cloud service anytime, anywhere.
  3. Bring Your Own License (BYOL): Bring your current on-premises Oracle software licenses to equivalent, highly automated Oracle IaaS and PaaS services in the cloud.

In a nutshell, these are the three pricing models that Oracle provides. Read more about the pricing models here.

Benefits of Cost Management


Oracle Cost Management provides various enterprise-grade controls to maintain control over cloud cost:

  1. Predictability: Budgets ensure predictable cloud spending and prevent overages.
  2. Control: Quotas help you centrally control the usage of high-value cloud resources.
  3. Visibility: The Cost Analysis dashboard helps maintain visibility over spending, and usage reports provide resource-level visibility.
  4. Optimization: The ability to perform cost optimization and lower spend.
  5. Extensibility: Lets you leverage the cloud management and BI tools you already know and use.

Pre-requisites

  • User must have access to an Oracle Cloud Infrastructure tenancy.
  • To use these cost management services, the user must either be part of the admin group or part of a group to which a policy grants the appropriate permissions.

Steps to Configure

Cost Analysis Dashboard


Cost Analysis is a visualization tool that helps you understand spending patterns at a glance. To use this tool, the user must be a member of the Administrators group. Cost Analysis can filter costs by date, tags, and compartments.


Use the Cost Analysis dashboard to view your spending by service or by department, compartment, or cost tracking tag.

1) In Start Date, select a date. In End Date, select a date (within six months of the start date). Click Apply Filters


2) From Tag Key, select a tag. Click Apply Filters


3) From Compartment, select a compartment. Click Apply Filters


OCI Budgets

A budget is used to track actual spending for the whole tenancy or per compartment. You can also set alerts on your budgets at predefined thresholds to get notified.

The following concepts are essential for budgets:

  • Budget: A monthly threshold you define for your cloud spending.
  • Alert: Email alerts that get sent out for your budget

1) Navigate to Budgets in OCI Console and click on Create Budget


2) Sample Budget Alert Emails


OCI Usage Reports

A usage report is a comma-separated value (CSV) file that contains detailed information about your OCI resource consumption. It is generated daily and stored in an Object Storage bucket.

It can be used in conjunction with your rate card for:

  • Invoice reconciliation
  • Custom reporting
  • Cross-charging
  • Cost optimization
  • Resource inventory

Sample Dashboard from a Usage Report


Read more about Usage Reports at https://k21academy.com/oci46.

Service Limits and Usage

When you sign up for Oracle Cloud Infrastructure, a set of service limits is configured for your tenancy. A service limit is the quota or allowance set on a resource. Your tenancy's limits, quotas, and usage can be seen in the Console, and service limits can be increased from within the Console by submitting a request.


Compartment Quotas

Quotas give you better control over how resources are consumed by letting you allocate resources to projects or departments. Compartment quotas let you restrict usage to a small set of resources, restrict resource counts, or disable services as necessary. They are similar to service limits, but service limits are set by Oracle, whereas compartment quotas are set by administrators.


Oracle Cloud Workload Cost Estimator

Oracle launched the Oracle Cloud Workload Cost Estimator last month, which shows a drastic difference between comparable computing scenarios on Oracle Cloud Infrastructure and AWS.

This tool offers an apples-to-apples comparison of key workloads between Oracle Cloud Infrastructure and AWS. You will be amazed to see results showing that "Oracle is a more cost-efficient option for many high-performance applications."


To know more about Oracle Cloud Workload Estimator, check here.

Cost Management Best Practices

Let’s take a look at some best practices for cost management:

  • Create a budget that matches your commitment amount and an alert at 100 percent of the forecast. This gives you an early warning if your spending increases and you’re at risk of getting an overage.
  • Use compartments for cost management along with access-control. Many customers set up one compartment per department for cost management and cross-charging.
  • Use cost-tracking tags (like cost-center) to allocate costs in more granular ways.
  • Enable monitoring on all resources. Monitoring data can be merged with cost data to gain powerful insights on how to improve resource utilization.
  • Usage reports are also used to analyze costs and drive custom solutions.

Related/Further Readings

Next Task For You

In our OCI Developer Associate [1Z0-1084] Certification training, we cover the fundamentals of cloud-native applications in the Cloud-Native Fundamentals module. In this training, we also cover how to develop, secure, test, and operate cloud-native applications.

Click on the Join Waitlist now button below to join the waitlist of our much-awaited [1Z0-1084] Oracle Cloud Infrastructure Developer Associate Certification.


The post Billing and Cost Management in Oracle Cloud (OCI) appeared first on Oracle Trainings.

Kubernetes Architecture | Kubernetes Components | Kubernetes Master & Worker Node | Managed Kubernetes Service


This post is the third video of our five-part video series on “Docker & Kubernetes”.

In this video blog, we cover the Kubernetes architecture, Kubernetes components, and Managed Kubernetes Services. We also discuss what the Kubernetes master node and worker node are, along with their components.

Note: If you have missed my previous post on "Docker Architecture | Docker Engine Components | Container Lifecycle", check it out here: https://k21academy.com/docker13

<Video>

What Is Kubernetes? 

In organizations, large numbers of containers run on multiple hosts at a time, and it is very hard to manage all those containers together; this is where Kubernetes comes in. Kubernetes is an open-source platform for managing containerized workloads and services. It takes care of scaling and failover for your applications running in containers.

Note: know more about the containers(Docker) & Kubernetes

Kubernetes Architecture Key Points

1) In the Kubernetes architecture, there are one or more masters and multiple nodes. More than one master is used to provide high availability.

2) The master node communicates with worker nodes via API server-to-kubelet communication.

3) On a worker node, there are one or more pods, and pods contain one or more containers.

4) Containers can be deployed using an image and can also be deployed externally by the user.
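
Once a cluster is up, you can see this split between the master (control plane) and the worker nodes with a couple of kubectl commands (output will differ per cluster):

  kubectl cluster-info        # shows the API server endpoint of the control plane
  kubectl get nodes -o wide   # lists master and worker nodes with roles and addresses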

Kubernetes Components


Kubernetes Master Node

The master node is a collection of components like storage, the controller manager, the scheduler, and the API server that make up the control plane of Kubernetes. When you interact with Kubernetes using the CLI, you are communicating with the cluster's Kubernetes master. All of these processes run on a single node in the cluster, which is also referred to as the master. There can be more than one master node in a cluster.

Master Node Components

1) The Kube API server performs all the administrative tasks on the master node. A user sends REST commands in YAML/JSON format to the API server, which processes and executes them. The Kube API server is the front end of the Kubernetes control plane.

2) etcd is a distributed key-value store used to store the cluster state. Kubernetes stores its state in the etcd database. Besides the cluster state, etcd also stores configuration details such as subnets and ConfigMaps.

3) The Kube scheduler assigns work to different worker nodes. It handles new requests coming from the API server and assigns them to healthy nodes.

4) Kube Controller Manager: the task of a controller is to obtain the desired state from the API server. If the desired state does not match the current state of the object, the control loop takes corrective steps to bring the current state in line with the desired state.

There are different types of controllers in Kubernetes, such as the following (see the quick example after the list):

  • Node controller: manages the nodes; it notices and responds when a node becomes unavailable or is destroyed.
  • Replication controller: ensures the desired number of containers is running in the replication group.
  • Endpoints controller: populates the Endpoints objects, that is, joins Services and Pods.
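
A quick way to watch the replication machinery reconcile state (the deployment name web is illustrative):

  kubectl create deployment web --image=nginx   # a deployment managing one pod
  kubectl scale deployment web --replicas=3     # the controller brings the count up to 3
  kubectl get pods -l app=web                   # observe the replicas it created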

Kubernetes Worker Node

The worker nodes in a cluster are the machines or physical servers that run your applications. The Kubernetes master controls each node, and multiple nodes connect to the master. On each node there are multiple pods running, and multiple containers run inside those pods.

Worker Node Components

1) The kubelet is an agent that runs on each worker node and communicates with the master node. It makes sure that the containers that are part of the pods are always healthy. It watches for tasks sent from the API server, executes them (for example, deploying or destroying a container), and then reports back to the master.

2) Kube-proxy enables communication between the multiple worker nodes. It maintains network rules on the nodes and makes sure the necessary rules are defined on each worker node so that containers can communicate with each other across different nodes.

3) A Kubernetes pod is a group of one or more containers that are deployed together on the same host. A pod is deployed with shared storage/network and a specification for how to run its containers. Containers in the same pod can easily communicate with each other as though they were on the same machine.

4) The container runtime is the software responsible for running containers. Kubernetes supports several container runtimes, such as Docker and containerd.
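
To interact with a container inside a pod (the pod name web-abc123 is illustrative):

  kubectl logs web-abc123             # view a container's output from its pod
  kubectl exec -it web-abc123 -- sh   # open a shell inside the pod's container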

Managed Kubernetes Service

In a self-managed Kubernetes deployment, both the master node and worker nodes are managed by the user. In a managed Kubernetes service, a third-party provider manages the master node while the user manages the worker nodes. Managed Kubernetes offerings also provide dedicated support and hosting with pre-configured environments, taking care of much of the configuration for you.

Managed Kubernetes Service Example:

a) Azure Kubernetes Service (AKS)

Note: know more about Azure Kubernetes Service

b) Oracle Kubernetes Engine (OKE)

Note: know more about Oracle Kubernetes Engine

c) Elastic Kubernetes Service (EKS)

Note: know more about Elastic Kubernetes Service

d) Google Kubernetes Engine (GKE)

Note: know more about Google Kubernetes Engine

Related Post

Join FREE Masterclass

To know about the roles and responsibilities of a Kubernetes administrator, why you should learn Docker and Kubernetes, job opportunities for Kubernetes administrators in the market, and what to study (including hands-on labs you must perform) to clear the Certified Kubernetes Administrator (CKA) certification exam, register for our FREE Masterclass.

Click on the below image to Register for Our FREE Masterclass Now!

Free Masterclass

The post Kubernetes Architecture | Kubernetes Components | Kubernetes Master & Worker Node | Managed Kubernetes Service appeared first on Oracle Trainings.

1Z0-1072-20 | Oracle Cloud Infrastructure 2020 Architect Associate


[New Update: 1st June 2020] This blog post covers everything you must know if you are appearing for the 1Z0-1072-20 Oracle Cloud Infrastructure 2020 Architect Associate Certification.

***This certification is the next version of Oracle Cloud Infrastructure 2018 Architect Associate [1Z0-932] and Oracle Cloud Infrastructure 2019 Architect Associate [1Z0-1072].

***[1Z0-1072] exam retires on 30-Jun-2020. You can take scheduled exams till 31-Oct-2020; however, you MUST register prior to 30-Jun-2020.

What Is [1Z0-1072-20] OCI 2020 Architect Associate?

The Oracle Cloud Infrastructure 2020 Architect Associate exam is designed for individuals who possess strong foundational knowledge of architecting infrastructure using Oracle Cloud Infrastructure services.

This certification validates a deep understanding of OCI services to spin up infrastructure and provides a competitive edge in the industry.

  • Note: To know more about the Oracle Cloud Infrastructure (OCI) building blocks like Region, Availability Domain (AD), Fault Domain (FD), Tenancy, Compartment, Compute, Virtual Cloud Network (VCN), Identity & Access Management (IAM), and Storage (Block, Object, Shared, Archive), CHECK HERE

Prerequisite For 1Z0-1072-20

There is no prerequisite for this certification; you can go for this exam directly. Up-to-date OCI learning and hands-on experience are recommended.

Note: This certification comes under the Cloud Recertification policy.  

Exam Details (1Z0-1072-20)

  • Name of the Certification: [1Z0-1072-20] Oracle Cloud Infrastructure 2020 Certified Architect Associate
  • Platform: Available on Oracle University and delivered via Pearson VUE.
  • Exam Duration: 85 minutes
  • Exam Number: 1Z0-1072-20
  • Number of Questions: 60
  • Passing score: 65%
  • Exam Cost: $150 or INR 10,475

Note: A 25% discount on the listed price is offered to Oracle PartnerNetwork (OPN) members.

What Topics to Learn for Oracle Cloud Infrastructure 2020 Architect Associate?

If you are planning to take this exam, you have to be well prepared with topics such as cloud computing concepts (HA, DR, security), regions, availability domains, OCI terminology and services, networking, databases, Autonomous Database, load balancing, FastConnect, VPN, compartments, Identity and Access Management, and tagging. Once you are done learning these topics and doing hands-on practice in the cloud, you are ready for the exam.

1) Launching Bare Metal & Virtual Compute Instances

In this, we will learn how to create Virtual Machine and Bare Metal instances using the Oracle Cloud Infrastructure Compute Service.

1Z0-1072 Exam

2) Database & Advanced Database

The Database service offers autonomous and user-managed Oracle Database cloud solutions.

  • Autonomous databases are preconfigured, fully-managed environments that are suitable for either transaction processing or for data warehouse workloads.
  • User-managed solutions are bare metal, virtual machine, and Exadata DB systems that you can customize with the resources and settings that meet your needs.

Note: To know more about the Oracle Cloud Database Options (VMDB, BMDB, ExaCS, ExaCS & Autonomous (ADW, ATP) CHECK HERE

1Z0-1072 OCI

3) Compute

Oracle Cloud Infrastructure Compute lets you provision and manage compute hosts, known as instances. You can launch instances as needed to meet your compute and application requirements.

1Z0-1072 Certification

4) Identity & Access Management

Oracle Cloud Infrastructure Identity and Access Management (IAM) lets you control who has access to your cloud resources. You can control what type of access a group of users has and to which specific resources.

OCI 1Z0-1072

5) Networking Basic & Advanced Concepts

When you work with Oracle Cloud Infrastructure, whether you are deploying a database or an application, the very first thing you will do is create a network (VCN & subnet).

  • You will then decide which part of the application/database sits in which subnet, what ports to open across subnets, how the primary database talks to DR, and where to deploy the Load Balancer for HA and networking across regions. A minimal CLI sketch of the first step follows below.
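
As a rough sketch of that first step with the OCI CLI (assuming the CLI is configured; the compartment OCID and display names below are placeholders):

# create a VCN
$ oci network vcn create --compartment-id ocid1.compartment.oc1..xxxx --cidr-block 10.0.0.0/16 --display-name demo-vcn

# create a regional subnet inside it (use the VCN OCID returned above)
$ oci network subnet create --compartment-id ocid1.compartment.oc1..xxxx --vcn-id <vcn-ocid> --cidr-block 10.0.1.0/24 --display-name demo-subnet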

Note: To learn more about Networking in Cloud: Who Should Learn & Why, CHECK HERE

Oracle Cloud OCI

6) Storage & Load Balancer

Oracle offers local NVMe SSDs and Block Volumes for IO-intensive application types, File Storage for enterprise applications, Object Storage for internet-accessible storage of unstructured data, and Archive Storage for long-term, reliable archival. Each is manageable through the Console and the CLI.
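
For example, the Object Storage service mentioned above can be driven from the CLI; a minimal sketch (the bucket name, file, and compartment OCID are placeholders):

# create a bucket in a compartment
$ oci os bucket create --name demo-bucket --compartment-id ocid1.compartment.oc1..xxxx

# upload an object and list the bucket contents
$ oci os object put --bucket-name demo-bucket --file ./backup.tar.gz
$ oci os object list --bucket-name demo-bucket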

The Load Balancer service provides automated traffic distribution from one entry point to multiple servers in a VCN. It automatically distributes traffic across the list of healthy backend servers based on:

  • Health Check Policy
  • Load Balancing Policy

OCI

7) Architecting Best Practices

OCI Oracle Cloud

New Topics added in 1Z0-1072-20

Practice the Hands-On Lab (HOL) thoroughly with the help of the Step-by-Step Activity Guides to Clear the Exam blog post.

Registration

Register for the 1Z0-1072-20 exam at Oracle’s official website i.e. Oracle Cloud Infrastructure 2020 Architect Associate

Related/References

Begin Your Cloud Journey

Begin your journey towards becoming an Oracle Cloud Certified Architect Associate by joining the FREE MasterClass on How To Become Oracle Certified Cloud Architect Associate in 8 Weeks

Click on the image below to Register for the FREE Masterclass NOW!

Oracle Certified Cloud Architect

FREE Community

The post 1Z0-1072-20 | Oracle Cloud Infrastructure 2020 Architect Associate appeared first on Oracle Trainings.

Kubernetes vs Docker | Docker Limitations


In this blog post, we are going to cover the most common questions we got in our FREE webinar: Kubernetes vs Docker and Docker limitations. Docker is a platform-as-a-service (PaaS) product used to run applications in containers, and Kubernetes is a container orchestration platform used to manage multiple containers.

Note: Watch our five-part video series on “Docker & Kubernetes”; in this series we covered Docker vs Virtual Machine, Docker Architecture, Kubernetes Architecture, Kubernetes High Availability, and Q/A on Docker & Kubernetes.

<Video>

What is docker?

Docker is an open-source platform based on Linux containers for developing, shipping, and running applications inside containers. We can deploy many containers simultaneously on a given host. Containers are very fast and lightweight because they don't carry the extra load of a hypervisor; they run directly within the host machine's kernel.
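
For instance, a minimal sketch (assuming Docker is installed locally; names are illustrative):

# start a container from the official nginx image; it boots in seconds because it shares the host kernel
$ docker run -d --name web -p 8080:80 nginx

# verify the container is running and responding
$ docker ps
$ curl http://localhost:8080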

What is Kubernetes? 

In organizations, a large number of containers run on multiple hosts at a time, and it is very hard to manage all of these containers manually, so we use Kubernetes. Kubernetes is an open-source platform for managing containerized workloads and services. Kubernetes takes care of scaling and failover for your application running on the container.
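
As a quick illustration of that scaling and failover, a sketch assuming kubectl access to a test cluster:

# run an application with three replicas
$ kubectl create deployment web --image=nginx --replicas=3

# scale out; Kubernetes spreads the extra pods across the cluster's nodes
$ kubectl scale deployment web --replicas=5

# if a pod dies, Kubernetes recreates it automatically; watch it happen
$ kubectl get pods -l app=web --watch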

Note: know more about the containers(Docker) & Kubernetes

Kubernetes vs Docker

A fundamental difference between Kubernetes and Docker is that Kubernetes is meant to run across a cluster while Docker runs on a single node. Docker is the most popular container platform, and Kubernetes is a platform for managing containerized workloads. Kubernetes can work with any containerization technology. Kubernetes helps with networking, load balancing, security, and scaling across all the Kubernetes nodes, which run your containers.

Docker Limitations

Docker Limitations

a) Dynamic IP: Docker assigns containers dynamic IP addresses. When we restart a container, its IP address changes, because IPs are not static in Docker.

b) Ephemeral data storage: Docker's default storage is ephemeral, not persistent; because of this, all of the data inside a container disappears forever when the container shuts down.

c) Confined to a single host: In Docker, we can run multiple containers on different hosts. Inside a single host we can connect multiple containers to each other via a bridge network, but we can't connect two containers that are running on different hosts.
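
The single-host limitation is easy to demonstrate with the default bridge driver (a minimal sketch; names are illustrative):

# on one host, two containers on the same user-defined bridge can reach each other by name
$ docker network create demo-net
$ docker run -d --name web --network demo-net nginx
$ docker run --rm --network demo-net busybox ping -c 2 web

# this bridge does not span hosts: a container on a second host cannot resolve or reach
# "web" without an overlay network (e.g. Docker Swarm) or an orchestrator like Kubernetes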

Related Post

Join FREE Masterclass

To know about the roles and responsibilities of a Kubernetes administrator, why you should learn Docker and Kubernetes, job opportunities for Kubernetes administrators in the market, and what to study (including Hands-On labs you must perform) to clear the Certified Kubernetes Administrator (CKA) certification exam, register for our FREE Masterclass.

Click on the below image to Register for Our FREE Masterclass Now!

Free Masterclass

The post Kubernetes vs Docker | Docker Limitations appeared first on Oracle Trainings.

Draw an OCI Architecture Diagram For Architects | OCI Designer ToolKit | OKIT


When you are designing an infrastructure, the first thing you need is an architecture diagram. There are various tools that can be used to build an OCI architecture diagram, like Visual Paradigm, Draw.io graphics, Cacoo templates, OKIT, and many more. These tools are really helpful in understanding and preparing an OCI infrastructure diagram. These diagrams are a great way to represent the overall design, deployment, and topology, and provide a bird's-eye view of the architecture.

From all the tools mentioned above, OKIT might be the best as it was released recently and offers much more than the other tools. In this blog, we will be discussing:

  • Overview of OKIT
  • Interface of OKIT
  • Steps to Install OKIT
  • Accessing OKIT from your browser
  • Creating a Custom Template

Overview Of OKIT

OCI Designer ToolKIT (OKIT) is an open-source, browser-based design tool for OCI which enables very fast design of a complete OCI-based infrastructure. It is a drag-and-drop application through which admins can prepare an OCI-based infrastructure and can also export it as Ansible / Terraform scripts.

It is a graphical environment through which you can create and visualize OCI environments using your web browser. The interface is extremely simple and easy to use, which makes it fast to deploy and use. It allows designers or architects to visualize and create an infrastructure and then lets them export it in different formats such as:

  • svg
  • png
  • jpeg

This toolkit can further be used to add key property information to the build infrastructure which can further be exported to different frameworks like:

  • Ansible
  • Terraform
  • OCI Resource Manager

Interface Of OKIT

The OCI Designer ToolKIT provides a simple and minimalistic interface which can be divided into three parts.

OKIT

Palette

It is present on the left side of the screen and gives you access to the different OCI artefacts. These icons can be dragged and dropped onto the Canvas to build an infrastructure.

Canvas

Present in the center of the screen, this is where the infrastructure diagram is built. Initially it will only contain a compartment, and you will have to drag and drop artefacts from the palette.

Properties

On the right side of the screen, we have the Properties slide-out panel through which you can edit the properties of the selected artefacts, which will then be applied when we export them for different frameworks.

Steps To Install OKIT

Step 1: Connect to your instance(on which you want to install OKIT) using PuTTY.

Step 2: Install Git using the below commands:

$ sudo yum install -y git

Git Install

Step 3: Install Docker using the below commands:

$ sudo yum install -y yum-utils device-mapper-persistent-data lvm2
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum install -y docker-ce
$ sudo systemctl start docker

Docker Install

Step 4: Install CLI using the below command:

$ bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"

Installing CLI

When prompted for Install directory and every other resource path, Press Enter (choose default).

When prompted with ‘Y/N’ for $PATH, enter Y; when prompted for the path of the rc file, press Enter (choose default).

Once it is installed, check the OCI CLI Version by typing:

 $ oci -v

OCI Version

Step 5: Now we need to configure the OCI CLI to create the OCI config file. Use the below command:

$ oci setup config

When prompted for the directory, press ENTER (choose default), then paste the OCIDs of your user and tenancy, and then enter the region.

Press Y and then Enter when asked for new API Signing Key. Leave directory, name and passphrase empty.

This will configure the OCI config file.

Now add the newly created public key into API keys present under your User.

Step 6: Clone the OCI Designer toolkit from GitHub using the below command:

$ git clone -b v0.5.1 --depth 1 https://github.com/oracle/oci-designer-toolkit.git

Once it has been cloned, change directory to oci-designer-toolkit and build the Docker image using these commands:

$ cd oci-designer-toolkit
$ sudo docker build --tag okit --file ./containers/docker/Dockerfile --force-rm ./containers/docker/
You will get output like this:

Step 7: Once all the above steps are done, run the Docker image we just built using the below command:

$ sudo docker run -d --rm -p 80:80 \
           --name okit \
           --hostname okit \
           -v ~/.oci:/root/.oci \
           -v `pwd`/okitweb:/okit/okitweb \
           -v `pwd`/visualiser:/okit/visualiser \
           -v `pwd`/log:/okit/log \
           okit

Running this command will give you an output as shown below:

container id

This completes the installation of OKIT on your Linux Instance in OCI.

Note: The above command can be run with -it instead of -d. Running the image with -it starts the server interactively, so the server shuts down when you close the command. The -it option is useful for seeing server errors and what is happening: accessing http://localhost/okit/designer will display the designer, and if it does not, you should see the error in the terminal where the command is running.

Accessing OKIT From Your Browser

Once you have configured OKIT on your instance you just need to check if your Security List allows it to connect to it.

So, go into your OCI account and the Security List of your public subnet and add an Ingress Rule as shown below:

Once you have added this Rule, you can now access it from your Local machine’s Browser from the below URL:

http://<Instance’s Public IP>/okit/designer

OKIT

Creating A Custom Template

OKIT lets you build and save a custom template. Creating a template is very easy and fast using this toolkit. Let’s take an example of Load Balancer front-ending a pair of instances.

Select “New” from the menu and use the drag-and-drop features of OKIT to create the following architecture.

You can now save the diagram and it will be available to reuse.

Related/Further Readings

Next Task For You

In our OCI Architect Associate [1Z0-1072] Certification training, we cover this in IAM which is Module 2 in our training.

Begin your journey towards becoming an Oracle Cloud Architect by Joining the FREE Masterclass on How To Become Oracle Cloud Architect in 8 Weeks.

OCI Learning Path

Click on the image below to Register for the FREE Masterclass NOW!

The post Draw an OCI Architecture Diagram For Architects | OCI Designer ToolKit | OKIT appeared first on Oracle Trainings.

AI 100 | Azure AI Engineer Associate Exam | All You Need To Know


Artificial Intelligence (AI) is one of the hottest buzzwords in the IT world right now and the rise in growth of AI is glaringly visible. So, do you also desire to be a  Microsoft Certified Azure AI Engineer Associate? If you are craving to clear this exam and get certified as an Azure AI Engineer Associate, then you are at the right place!

What is Azure AI Engineer Associate Certification?

Microsoft Certified Azure AI Engineer Associate Tag

The Azure AI Engineer Associate certification validates the skills and knowledge to use Cognitive Services, Machine Learning, and Knowledge Mining to architect and implement Microsoft AI solutions involving natural language processing, speech, computer vision, bots, and agents.

Why Use Azure AI solutions?

  • Gives a cloud platform for implementing AI Solutions
  • Provide no-code ML models for processing data.
  • Implement and monitor AI solutions
  • You can design AI as cost-effective Intelligent Edge solutions.
  • You can design and identify data governance, and requirements.

AI 100 | Benefits

  • As a result of the rocketing growth of AI, the importance of an Azure AI Engineer Associate is also increasingly visible.
  • It validates the capability of translating the vision of solution architects into the development of end-to-end solutions.
  • Azure AI engineers have to work in collaboration with data engineers, data scientists, AI developers, and IoT specialists, as all of these roles are interdependent.

AI 100 | Details

  • Name of the Exam: Designing and Implementing an Azure AI Solution
  • Exam Code: AI-100
  • Technology: Microsoft Azure
  • Prerequisites: None
  • Exam Duration: 220 minutes
  • Number of Questions: Approximately 60
  • Registration Fee: $165 USD (plus applicable taxes)
  • Exam Language: English, Korean, Japanese and Simplified Chinese

To apply for Microsoft Azure AI Engineer Associate certification click here

AI-100 exam pricing

Course Outline

  1. Analysis of solution requirements (25%-30%)
  2. Designing AI solutions (40%-45%)
  3. Implementing and monitoring AI solutions (25%-30%)

To see the blueprint which describes this course in detail click here.

Who This Certification Is For?

After all this, the question that strikes you is ‘Am I the one for this certification?!’ Well, here is the answer:

  • Candidates who are interested in Machine Learning and AI.
  • IT professionals who have a thorough knowledge of Microsoft Azure.
  • Those working as Azure Engineers.
  • People working as Data Engineers.

Related/References

The post AI 100 | Azure AI Engineer Associate Exam | All You Need To Know appeared first on Oracle Trainings.


Microsoft Certified Azure Data Scientist Associate | DP 100 | All You Need To Know


Many people in the IT Industry are thinking that it’s a great time to be a data scientist these days, do you feel the same too?  Talking of a buzz-worthy career, data science has become one of the fastest-growing disciplines. So, that is the reason why the DP 100 exam is taking over the job market at a very brisk pace.

Let’s see what Microsoft Certified Azure Data Scientist Associate (DP 100) is.

What Is Azure Data Scientist Certification?

Microsoft Certified: Azure Data Scientist Associate tag

The DP 100 Microsoft Azure Data Scientist Certification is aimed towards those who apply their knowledge of data science and machine learning to implement and run machine learning workloads on Azure, using Azure Machine Learning Service.  This implies planning and creating a suitable working environment for data science workloads on Azure, running data experiments, and training predictive ML models.

Why Should You Learn Data Science?

There is a lot of raw data generated every day in most IT industries, so they need a dedicated team who can evaluate this data, plot it to make inferences, and apply machine learning algorithms to make predictions. Hence there is a huge gap between the demand and supply of Data Scientists.

The average salary for a Data Scientist is $117,345/yr according to some sources. This is well above the national average of $44,564. Hence, a Data Scientist makes 163% more than the national average salary!

Azure DP 100 certification | Benefits

  • The demand for Data Scientists is increasing, and a CV with this gleaming certification will have an enormous advantage.
  • In terms of job prospects and earnings, a certification leads to a significant gain in both.
  • Most people agree that certification has improved their earnings, and 84% of people have seen better job prospects after getting certified.
  • Updating your profile with this certificate will boost your job profile and shoot up your chances of getting chosen.

Prerequisites

  • Fundamental knowledge of Microsoft Azure
  • Experience of writing Python code to work with data, using libraries such as Numpy, Pandas, and Matplotlib.
  • Understanding of data science; including how to prepare data, and train machine learning models using common machine learning libraries such as Scikit-Learn, PyTorch, or Tensorflow.

Microsoft Certified Azure Data Scientist Associate | Details

  • Certification Name: [DP-100] Microsoft Certified: Azure Data Scientist Associate.
  • Prerequisites: There are no prerequisites for taking this certification.
  • Exam Duration: 180 minutes
  • Number of Questions: 40 – 60
  • Passing Score: 700
  • Exam Cost: USD 165.00

To apply for Microsoft Certified Azure Data Science Associate (DP-100) click here.

DP-100 exam pricing

Course Outline

The following domains are the torch-bearers of the DP-100 exam.

  1. Set up an Azure Machine Learning Workspace (30-35%).
  2. Run Experiments and Train Models (25-30%).
  3. Optimize and Manage Models (20-25%).
  4. Deploy and Consume Models (20-25%).

Learning Path and domains covered in the DP-100 exam

Who This Certification Is for?

After all this, you will be waiting to know that are you the one for this certification right? Well, here is your answer to that,

  • Candidates who are interested in Machine Learning and AI.
  • IT professionals who have a thorough knowledge of Microsoft Azure and some knowledge of data handling.
  • People who are good at statistics.
  • Data Scientists who prepare data, train models, and evaluate competing models but have never done this on Azure.

Related/References:

The post Microsoft Certified Azure Data Scientist Associate | DP 100 | All You Need To Know appeared first on Oracle Trainings.

AZ-204 | Azure Developer Associate | Everything You Need To Know


The urge of enterprises to move their operations to the cloud is increasing every day. Public cloud service providers are persistently upgrading their services with the offering of new developments in technology. Hence, the demand for cloud computing certifications like Az 204 is also becoming noteworthy every day.

If you have been looking for the best source to learn about the AZ-204 exam preparation, then you have arrived at the right place!

This blog covers everything you need to know for the Microsoft certified AZ 204 exam preparation. So, let’s jump in to start with what looks like a promising career ahead!

What is Azure Developer Certification? 

AZ 204 Badge

Azure development certification is aimed at those whose responsibilities include participating in all phases of cloud development, from requirements definition and design to development, deployment, maintenance, performance tuning, and monitoring.

Why Become an Azure Developer?

  • By getting your certification in this field, it’s likely you will see career growth which in turn increases your earnings.
  • If flexibility is what you are looking for when it comes to your career, a Microsoft Azure certification can offer you just that.
  • You learn how theoretical concepts can be used to solve business problems.

AZ 204 | Certification Benefits 

  • Cloud Technology is the past, present, and future.
  • As discussed earlier, industries' transition to the cloud means that the demand for cloud computing professionals will continue to grow.
  • Better chances of getting shortlisted for an interview with the certificate.
  • It provides security in your job.
  • This certification is a prerequisite for the DevOps exam; click here to know everything about it.

AZ 204 | Exam Details

  • Name of the Exam: Microsoft Certified Azure Developer Associate
  • Exam Code: AZ-204
  • Technology: Microsoft Azure
  • Prerequisites: A candidate for this certification should have 1-2 years of professional development experience and experience with Microsoft Azure and also needs to have programming expertise in a high-level language supported by Azure.
  • Exam Duration: 120 minutes
  • Number of Questions: 40-60
  • Registration Fee: $165 USD (plus applicable taxes)
  • Exam Language: English, Korean, Japanese and Simplified Chinese

To apply for Microsoft Azure Developer Associate certification click here.

AZ 204 exam fee

AZ 204 | Exam Domains

  1. Develop Azure compute solutions (25-30%)
  2. Develop for Azure storage (10-15%)
  3. Implement Azure security (15-20%)
  4. Monitor, troubleshoot, and optimize Azure solutions (10-15%)
  5. Connect to and consume Azure services and third-party services (25-30%)

To get detailed information on these domains, do go through the blueprint which can be accessed here

AZ-204 learning path

Who is This Certification For?

If you are still wondering if you can get this certification, here is your answer:

  1. Candidates working as Azure Engineers.
  2. Those who have subject matter expertise designing, building, testing, and maintaining cloud applications.
  3. People who are cloud DBAs and cloud administrators.
  4. People who have a basic understanding of networking and virtualization.
  5. Those who are good at the implementation of cloud solutions.

Related/References

 

The post AZ-204 | Azure Developer Associate | Everything You Need To Know appeared first on Oracle Trainings.

Amazon EKS | Kubernetes On AWS


Kubernetes cluster is used to deploy containerized applications on the cloud; the containers it runs share the same underlying infrastructure and OS. In this post, we are going to cover all about Amazon EKS (Elastic Kubernetes Service), used to deploy applications on AWS.

Cloud Vendors adopted Kubernetes in different ways.

  • AWS – Elastic Kubernetes Service (EKS)
  • Microsoft Azure – Azure Kubernetes Service (AKS)
  • Google Cloud Platform – Google Kubernetes Engine (GKE)
  • Oracle Cloud – Oracle Kubernetes Engine (OKE)
  • Digital Ocean – Digital Ocean Kubernetes Service (DOKS)

Containers & Kubernetes

Containers package software into an isolated unit with everything that the application needs to run. Docker containers are built from Docker images.

To know about the difference between Docker & Virtual machine click here https://k21academy.com/docker12

Kubernetes is an open-source platform for managing containerized workloads and services. Kubernetes takes care of scaling and failover for your application running on the container.

To know the difference between Kubernetes and Docker click here http://k21academy.com/kubernetes16

Overview Of EKS

Amazon EKS is a managed service that is used to run Kubernetes on AWS. Using EKS, users don't have to maintain a Kubernetes control plane on their own. It is used to automate the deployment, scaling, and maintenance of containerized applications. It works with most operating systems.

EKS is integrated with various AWS services:

  • ECR (Elastic Container Registry) for container images.
  • Elastic Load Balancer for distributing traffic.
  • IAM for providing authentication and authorization.
  • VPC (Virtual Private Cloud) for isolating resources.

EKS in AWS

Benefits Of Using EKS

  • No setup required to configure Kubernetes on AWS.
  • With EKS, users need not create or manage the control plane themselves.
  • Worker nodes are also managed by Amazon EKS.
  • EKS integrates with various AWS tools.

Note: Using ECS we have to manage the underlying OS, infrastructure, and container engine, but using EKS we only have to provide the containerized application, and the rest is managed by EKS.

ECS vs EKS

Components Of EKS

1) Nodes: A node is a physical or virtual machine. In EKS, both master nodes and worker nodes are managed by EKS. There are two types of nodes:

  • Master Nodes: These are responsible for the control plane of the Kubernetes cluster and are managed by EKS.
    • API Server: handles all API requests, whether from kubectl (the Kubernetes CLI) or the REST API.
    • etcd: a highly available key-value store that is distributed across the Kubernetes cluster to store configuration data.
    • Controller Manager: makes sure the desired number of containers is running at any point in time. It keeps count of the containers in use and records their state.
    • Scheduler: decides what work needs to be done and when and where to run it. It integrates with the Controller Manager and the API Server.
  • Worker Nodes: These are responsible for the data plane of the Kubernetes environment.
    • kubelet: controls the flow to and from the API server and makes sure containers are running in their pods.
    • kube-proxy: maintains networking rules and access control; it acts like a firewall.

components of kubernetes

2) Pods: A group of one or more containers is called a pod. Containers in a pod share networking, storage, an IP address, and port space.

3) DaemonSet: ensures that every node runs a copy of a certain pod; it is commonly used for monitoring and logging agents.

4) Job: creates pods that run a task to completion and tracks the state of that work.
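
Once kubectl is pointed at an EKS cluster, these objects can be inspected directly; a minimal sketch (names are illustrative):

# DaemonSets run one pod per node; EKS itself ships some, such as kube-proxy
$ kubectl get daemonsets -n kube-system

# a Job runs a task to completion and records its state
$ kubectl create job hello --image=busybox -- echo "hello from a job"
$ kubectl get jobs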

EKS Workflow

eks workflow

  1. Provision EKS cluster using AWS Console, AWS CLI, or one of the AWS SDKs.
  2. Deploy worker nodes to the EKS cluster. There is already a predefined template that will automatically configure nodes.
  3. Now we configure Kubernetes tools such as kubectl to communicate with the Kubernetes cluster.
  4. We are now all set to deploy an application on the Kubernetes cluster (a CLI sketch of these steps follows below).
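
One common way to run through these steps from the command line is eksctl together with the AWS CLI; a sketch (the cluster name and region are placeholders):

# steps 1 & 2: provision a cluster with a node group of two worker nodes
$ eksctl create cluster --name demo-cluster --region us-east-1 --nodes 2

# step 3: point kubectl at the new cluster
$ aws eks update-kubeconfig --name demo-cluster --region us-east-1

# step 4: deploy an application
$ kubectl create deployment web --image=nginx
$ kubectl get pods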

EKS Pricing

The pricing of various services in AWS changes over time, so it is recommended to check the current pricing before deploying clusters.

As a standard, we pay $0.10/hour for each Amazon EKS cluster, and we can deploy multiple applications on each EKS cluster. We can run EKS using either EC2 or AWS Fargate, and on-premises using AWS Outposts.

To know more about Amazon EKS (Elastic Kubernetes Service) click here

Reference/Related Post

Join FREE Masterclass

To know more about Docker and Kubernetes for beginners, why you should learn them, job opportunities, and what to study (including Hands-On labs you must perform) to clear the CKA certification exam, register for our FREE Masterclass.

Click on the below image to Register for Our FREE Masterclass Now!

Free Masterclass

The post Amazon EKS | Kubernetes On AWS appeared first on Oracle Trainings.

Azure DevOps Environments | How To Setup DevOps Environment | Approval Checks | Azure DevOps Pipeline


This blog gives a step-by-step walkthrough of implementing CI/CD in an Azure environment using YAML and pipelines.

You can visit our previous blog to know more about the [AZ-400] Microsoft Azure DevOps certification. 

What Is An Azure DevOps Environment?

An environment is a collection of resources that can be targeted by deployments from a pipeline. Environments can include Kubernetes clusters, Azure web apps, virtual machines, and databases. Typical examples of environment names are Dev, Test, QA, Staging, and Production.

The advantages of using environments include the following.

  • Deployment history — Pipeline name and run details are recorded for deployments to an environment and its resources. In the context of multiple pipelines targeting the same environment or resource, the deployment history of an environment is useful to identify the source of changes.
  • Traceability of commits and work items — View jobs within the pipeline run that target an environment. You can also view the commits and work items that were newly deployed to the environment. Traceability also allows one to track whether a code change (commit) or feature/bug-fix (work items) reached an environment.
  • Diagnose resource health — Validate whether the application is functioning at its desired state.
  • Permissions — Secure environments by specifying which users and pipelines are allowed to target an environment.

How To Setup DevOps Environment In Azure

  • Step 1: Sign in to our Azure DevOps organization and navigate to our project.
  • Step 2: In our project, navigate to the Pipelines page. Then choose Environments and click on Create Environment.

Environment creation

  • Step 3: After adding the name of an environment (required) and the description (optional), we can create an environment.

  • Step 4: Resources can be added to an existing environment later as well.

new environment screen

  • Step 5: Then we can get started by adding this newly created environment into the YAML code.

pipeline code

  • Step 6: Once we have added that environment into the YAML pipeline, we need to run it (a CLI alternative is sketched below the screenshot).

Pipeline dashboard and summary
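
If you prefer the command line to the portal, the same pipeline can also be queued with the Azure DevOps CLI; a sketch, assuming the azure-devops extension is installed (the organization, project, and pipeline names are placeholders):

# set the default organization and project once
$ az devops configure --defaults organization=https://dev.azure.com/<your-org> project=<your-project>

# queue a run of the YAML pipeline
$ az pipelines run --name <your-pipeline-name>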

  • Step 7: We can see the multiple environments as part of this YAML; basically, we are adding CI/CD into the YAML file.

  • Step 8: From the above environment, we can see that the below deployment happened successfully.

pipeline deployment confirmation screen

 

  • Step 9: Now after this we can log in to the Azure portal and then check if the below deployment is successful or not.

  • Step 10: Now when we click on the below URL, we can see the basic build and deployment of the “HELLO WORLD” application, a sample Node.js app.

pipeline URL check

 

Using Approval Checks

We can also manually control when a stage should run using approval checks. You can use approval checks to control deployments to production environments.

Checks are a mechanism available to the resource owner to control when a stage in a pipeline consumes resources.

As the owner of a resource, such as an environment, we can define approvals and checks that must be satisfied before a stage consuming that resource starts. Currently, manual approval checks are supported by environments.

We can control who can create, view, use, and manage the environments with user permissions. There are four roles — Creator (scope: all environments), Reader, User, and Administrator. In the specific environment’s user permissions panel, we can set the permissions that are inherited and we can override the roles for each environment.

  • Navigate to the specific Environment that you would like to authorize.
  • Click on the overflow menu button located at the top-right part of the page next to “Add resource” and choose Security to view the settings.
  • In the User permissions blade, click on +Add to add a user or group and select a suitable Role.

Azure DevOps Pipeline | How To Use

Pipeline permissions can be used to authorize all or selected pipelines for deployment to the environment.

  • To remove Open access on the environment or resource, click the Restrict permission in Pipeline permissions.
  • To allow specific pipelines to deploy to an environment or a specific resource, click + and choose from the list of pipelines.

This is a basic YAML CI/CD pipeline without any environment resources such as VMs or Kubernetes pods. That said, we can do the same for other resources as well.

Related/References

Next Task For You

Begin your journey towards becoming a Microsoft [AZ-400] Certified Azure DevOps Engineer and earning a lot more in 2020 by joining our FREE Masterclass.

Click on the image below to Register for the Free Masterclass Now!Masterclass AZ-400

The post Azure DevOps Environments | How To Setup DevOps Environment | Approval Checks | Azure DevOps Pipeline appeared first on Oracle Trainings.

1Z0-1088-20 | Oracle Cloud Infrastructure 2020 Enterprise Workloads Associate


What Is [1Z0-1088-20] Enterprise Workloads Associate

The Oracle Cloud Infrastructure 2020 Enterprise Workloads Associate (1Z0-1088-20) certification is designed to develop solution expertise in deploying Oracle enterprise workloads on OCI. This certification validates understanding of specific OCI solutions, such as, but not limited to, Database Migration, WebLogic Applications, Oracle Applications Unlimited, Data Warehouse, and Analytics. It is recommended for all professionals who have attained the OCI Architect Associate certification or who have equivalent hands-on job or industry experience.

Pre-requisite For 1Z0-1088-20

You need to clear the OCI Architect Associate certification before enrolling for the Oracle Cloud Infrastructure 2020 Enterprise Workloads Associate (1Z0-1088-20) exam.

Note: This certification comes under the Cloud Recertification policy

Exam Details (1Z0-1088-20)

  • Exam Title: Oracle Cloud Infrastructure 2020 Enterprise Workloads Associate
  • Exam Number: 1Z0-1088-20
  • Exam Price: $150.00
  • Format: Multiple Choice
  • Duration: 85 Minutes
  • Number of Questions: 55
  • Passing score: 68%
  • Validated Against: This exam has been validated against Oracle Cloud Infrastructure 2020

Exam Syllabus

  • Introduction to Oracle Enterprise Solutions
    • Fundamentals of building solutions
    • Working with reference architectures, design patterns and blueprints
    • Design scalable and elastic solutions for high availability and disaster recovery
    • Understand validated, supported, certified solutions
  • Database Migration Techniques
    • Comprehend constraints for database migration
    • List all the tools and methods for migration and articulate the trade-offs and advantages
    • Define the right migration strategy and implement with the tools provided
  • Migrate WebLogic Applications
    • Understanding WebLogic Application Concepts, OCI provisioning options
    • Migration options and Implementation process
  • Migrate E-business Suite to OCI
    • Value drivers for E-Business suite migration, key solution attributes
    • OCI services, architectural guidelines for E-Business Suite on OCI, performance, optimization
    • Understanding customer operating goals, architectural goals, pros/cons of tools
  • Migrate JD Edwards to OCI
    • Value drivers for JDE migration, key solution attributes
    • OCI services, architectural guidelines for JD Edwards on OCI, performance, optimization
    • Understanding customer operating goals, architectural goals, pros/cons of tools
  • Migrate PeopleSoft to OCI
    • Value drivers for PeopleSoft migration, key solution attributes
    • OCI services, architectural guidelines for PeopleSoft on OCI, performance, optimization
    • Understanding customer operating goals, architectural goals, pros/cons of tools
  • Migrate Siebel to OCI
    • OCI services, architectural guidelines for Siebel on OCI, performance, optimization
    • Understanding customer operating goals, architectural goals, pros/cons of tools
  • Migrate Hyperion to OCI
    • Value drivers for Hyperion migration, key solution attributes
    • OCI services, architectural guidelines for Hyperion on OCI, performance, optimization
  • Data Warehouse and Analytics
    • Understanding Data Warehouse & Analytics Solutions Components
    • Choosing the right tools for creating an LOB Data Mart

Registration

Register for the exam at Oracle’s official website i.e. Oracle Cloud Infrastructure 2020 Enterprise Workloads Associate

The post 1Z0-1088-20 | Oracle Cloud Infrastructure 2020 Enterprise Workloads Associate appeared first on Oracle Trainings.
