Channel: Cloud Training Program

Big Data & Hadoop Architecture, Components & Overview


This post covers Big Data & Hadoop Overview, Concepts, Architecture, including Hadoop Distributed File System (HDFS). 

This post is for beginners who are just starting to learn Hadoop/Big Data and covers some of the very basic questions, like what Big Data is and how Hadoop is related to Big Data.

In the next follow-up post, I'll cover what you must learn and the roadmap to start learning Big Data & Hadoop.

What is Big DATA?


  • Big Data is a term used to describe a huge amount of data, both structured & unstructured.
  • Structured Data refers to highly organised data, as in relational databases (RDBMS), where information is stored in tables (rows & columns). Structured data is easy to search for end users or search engines.
  • Unstructured Data, in contrast, doesn’t fit into the traditional rows & columns structure of an RDBMS. Examples of unstructured data are emails, videos, audio files, web pages, social media messages, etc.

 

Characteristics of Big Data

Big Data can be defined by the following characteristics:

  • Volume: The amount of data matters in Big Data. You’ll deal with huge amounts of unstructured data that is low in density. The volume of data may range from terabytes to petabytes. Size determines whether data can be considered Big Data or not.
  • Velocity: the fast rate at which data is received and processed. Big Data is often available in real time.
  • Variety: refers to the many types of data that are available, both structured & unstructured, i.e. audio, video, text messages, images. This variety helps a person analysing big data to derive meaningful results.
  • Veracity: refers to the data quality of the captured information, which affects the accuracy of analysis.

Big Data is a huge collection of data sets that can’t be stored on a single machine. It is huge-volume, fast-velocity, and varied-variety information assets that demand an innovative platform for enhanced insights and decision making.

 

The Problem (Big Data) & Solution (Hadoop)

Big Data is massive, poorly or loosely structured, unwieldy data beyond the petabyte scale. This data cannot be understood by a human in its full context.

Hadoop is the most popular and in-demand Big Data tool that solves problems related to Big Data.

Here is the timeline for Hadoop from Apache Software Foundation

What is Hadoop?

Hadoop is an open-source framework that allows you to store and process big data in a distributed environment across clusters of computers using simple programming models.

Hadoop is designed to scale up from single servers to thousands of machines, each offering local computation and storage.


 

Hadoop Components

Hadoop has three main components: Hadoop Distributed File System (HDFS), Hadoop MapReduce, and Hadoop YARN.

  • A) Data Storage -> Hadoop Distributed File System (HDFS): A distributed file system that provides high-throughput access to application data.
  • B) Data Processing-> Hadoop MapReduce: This is a YARN-based system for parallel processing of large datasets. The term MapReduce actually refers to the following two different tasks that Hadoop programs perform:
    • The Map Task: This is the first task, which takes input data and converts it into a set of data, where individual elements are broken down into tuples (key/value pairs).
    • The Reduce Task: This task takes the output from a map task as input and combines those data tuples into a smaller set of tuples. The reduce task is always performed after the map task (a minimal word-count run is sketched just after this list).
  • C) Scheduling & Resource Management -> Hadoop YARN: This is a framework for job scheduling and cluster resource management.
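
To make the map and reduce phases concrete, here is a minimal word-count run using the example jar that ships with a standard Hadoop distribution. The paths and directory names below are illustrative and may differ in your environment.

# Put some sample text files into HDFS
hdfs dfs -mkdir -p /user/demo/wordcount/input
hdfs dfs -put ./sample-*.txt /user/demo/wordcount/input

# Run the bundled word-count job: mappers emit (word, 1) pairs, reducers sum the counts per word
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
  wordcount /user/demo/wordcount/input /user/demo/wordcount/output

# Inspect the reducer output
hdfs dfs -cat /user/demo/wordcount/output/part-r-00000 | head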

 

HDFS Architecture

The main components of HDFS are the NameNode and the DataNode.

  • NameNode: It is the master daemon that maintains and manages the DataNodes (slave nodes). It records the metadata of all the files stored in the cluster, e.g. location of blocks stored, the size of the files, permissions, hierarchy, etc.
  • DataNode: These are slave daemons which run on each slave machine. The actual data is stored on DataNodes. They are responsible for serving read and write requests from the clients. They are also responsible for creating blocks, deleting blocks, and replicating them based on the decisions taken by the NameNode (see the sample commands after this list).
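
A few standard HDFS commands illustrate this split of responsibilities; the path used below is just an example:

hdfs dfsadmin -report                              # NameNode view: live DataNodes, capacity, block counts
hdfs fsck /user/demo -files -blocks -locations     # which DataNodes hold the block replicas of each file
hdfs dfs -ls /                                     # browse the namespace (metadata) maintained by the NameNode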

This post is from our Big Data Hadoop Administration Training, in which we cover HDFS Overview & Architecture, Cloudera, Hive, Spark, Cluster Maintenance, Security, YARN and much more.

If you are looking for commonly asked interview questions for Big Data Hadoop Administration then just click below and get that in your inbox or join our Private Facebook Group dedicated to Big Data Hadoop Members Only.

In next follow up post, I’ll cover what thing you must learn & roadmap to Start learning Big Data & Hadoop.


Big Data Hadoop IQ Guide Banner Image

The post Big Data & Hadoop Architecture, Components & Overview appeared first on Oracle Trainings.


Oracle PaaS Offerings on Oracle Cloud Infrastructure (OCI)


This post covers various Oracle PaaS Offerings as of June 2018 on Oracle Cloud Infrastructure (OCI)

Note: OCI is an offering under the IaaS service model (the other 2 cloud service models are SaaS & PaaS); OCI is the re-branding of Bare Metal Cloud Service (BMCS).

 

Oracle PaaS Offerings on OCI:

1. JCS: Java Cloud Service (consists of WebLogic, Coherence & OTD on Cloud as a PaaS offering)

2. DBCS: Database Cloud Service (Oracle Database on Oracle Cloud as PaaS offering). Check more on DBCS Offerings & Certification Exam for DBAs on Cloud (1Z0-160), check here and here

3. BDCS: Oracle Big Data Cloud Service Compute Edition (Apache Hadoop & Apache Spark)

4. DHCS: Data Hub Cloud Service: NoSQL Database for cloud-native applications

5. Oracle MySQL Cloud Service: MySQL Server

6. EHCS: Oracle Event Hub Cloud Service ( Apache Kafka )

7. SOA CS: Oracle SOA Cloud Service, deploying Oracle SOA on Cloud as PaaS offering

8. ACCS: Application Container Cloud Service: Container Development Environment running NodeJS, Java, PHP, NoSql, Python, Ruby, GO, .NET Applications

9. ADWC: Oracle Autonomous Data Warehouse Cloud Service

10. AVBCS: Oracle Autonomous Visual Builder Cloud Service, a browser-based visual-builder environment for creating web and mobile applications

11. AICS – Oracle Autonomous Integration Cloud Service for Integrating On-Premise and SaaS-based Cloud Applications

12. AACS – Autonomous Analytics Cloud Service, Analytics Solution in the Cloud

 

Note: If you are new to OCI, then join our FREE 90-minute master call on getting started with Oracle Cloud (OCI) for Architects: What, Why & How.

Check how one of the above services, Java Cloud Service (JCS), can be deployed on OCI – watch from 2:15.

What’s updated for PaaS on OCI in May 2018


The post Oracle PaaS Offerings on Oracle Cloud Infrastructure (OCI) appeared first on Oracle Trainings.

Big Data Hadoop Administration: Step by Step Activity Guides


Big Data Hadoop skills are in high demand nowadays. For those who are new to this term: Big Data is a collection of large datasets that cannot be processed using traditional computing techniques, and Hadoop is a software framework for storing and processing Big Data. It is an open-source tool built on the Java platform that provides massive storage for any kind of data, enormous processing power, and the ability to handle virtually limitless concurrent tasks or jobs.

So if you are planning your future in Big Data Hadoop, then you must be aware of terms like Hadoop Distributed File System (HDFS), Cloudera Manager, Hive & Impala, Spark architecture, cluster maintenance, security, YARN, etc. This post covers the hands-on guides that you must perform to learn & become an expert in Big Data Hadoop administration.

 1. Activity Guide I: Cloudera Manager Installation

First of all, you should be aware of how to Install and Configure Cloudera Manager.

Cloudera Manager automates the installation and configuration of CDH and managed services on a cluster, requiring only that you have root SSH access to your cluster’s hosts, and access to the internet or a local repository with installation files for all these hosts. Cloudera Manager installation software consists of:

  • A small self-executing Cloudera Manager installation program to install the Cloudera Manager Server and other packages in preparation for host installation.
  • Cloudera Manager wizard for automating CDH and managed service installation and configuration on the cluster hosts. Cloudera Manager provides two methods for installing CDH and managed services: traditional packages (RPMs or Debian packages) or parcels. Parcels simplify the installation process and, more importantly, allow you to download, distribute, and activate new minor versions of CDH and managed services from within Cloudera Manager.

The following illustrates a sample installation:

Cloudera Manager Installation
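
As a rough sketch, the installer-based path looks like the commands below. The download URL is the one documented for Cloudera Manager 5 at the time of writing, so adjust it for your version and environment.

wget https://archive.cloudera.com/cm5/installer/latest/cloudera-manager-installer.bin
chmod u+x cloudera-manager-installer.bin
sudo ./cloudera-manager-installer.bin    # installs Cloudera Manager Server, its database, and supporting packages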

2. Activity Guide II: Cloudera Manager Console

Once you have gone through the installation process of Cloudera Manager, then you are ready to use & access the Cloudera Manager Console.

Cloudera Manager Admin Console is the web-based UI that you use to configure, manage, and monitor CDH.

If there are no services configured when you log into the Cloudera Manager Admin Console, the Cloudera Manager installation wizard displays. If services have been configured, the Cloudera Manager top navigation bar and Homepage display. In addition to a link to the Home page, the Cloudera Manager Admin Console top navigation bar provides the following features:
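
By default the Admin Console listens on port 7180 of the Cloudera Manager Server host, and the initial login is admin/admin; the hostname below is a placeholder for a quick reachability check.

# Prints the HTTP status code if the Cloudera Manager Admin Console is reachable
curl -s -o /dev/null -w "%{http_code}\n" http://<cm_server_host>:7180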

Cloudera Manager Console

3. Activity Guide III: Hive & Impala Flow & Logs

In this Activity Guide, You will get to learn the Process Flow and Logs of Hive & Impala.

A major Impala goal is to make SQL-on-Hadoop operations fast and efficient enough to appeal to new categories of users and open up Hadoop to new types of use cases. Where practical, it makes use of existing Apache Hive infrastructure that many Hadoop users already have in place to perform long-running, batch-oriented SQL queries.

In particular, Impala keeps its table definitions in a traditional MySQL or PostgreSQL database known as the Metastore, the same database where Hive keeps this type of data. Thus, Impala can access tables defined or loaded by Hive, as long as all columns use Impala-supported data types, file formats, and compression codecs.
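
For example, you can query a Hive-defined table from Impala with impala-shell; the host and table names below are placeholders, 21000 is the default impalad port, and INVALIDATE METADATA tells Impala to pick up tables newly created through Hive.

impala-shell -i <impalad_host>:21000 -q "INVALIDATE METADATA"
impala-shell -i <impalad_host>:21000 -q "SELECT COUNT(*) FROM web_logs"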

4. Activity Guide IV: Spark Architecture & Process Flow

The next task is to learn & understand the concepts of the Spark architecture. In this activity guide you will learn about Spark components, the process flow of getting started with a Spark job, and how to troubleshoot a Spark job.

Apache Spark is an open source, general-purpose distributed computing engine used for processing and analyzing a large amount of data. Just like Hadoop MapReduce, it also works with the system to distribute data across the cluster and process the data in parallel.

Apache Spark is considered as a powerful complement to Hadoop, big data’s original technology of choice. Spark is a more accessible, powerful and capable big data tool for tackling various big data challenges.
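
A typical way to run a Spark job on a Hadoop cluster is to submit it to YARN. The resource sizes and application file below are only an illustrative sketch, not values from this course.

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  --executor-memory 2g \
  --executor-cores 2 \
  my_spark_job.py

# Follow up on the YARN application that ran the job
yarn application -list -appStates FINISHED
yarn logs -applicationId <application_id>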

Spark Architecture

5. Activity Guide V: Data Ingestion Using Sqoop & Kafka

The next topic is an introduction to Sqoop & Kafka; these tools are used for data ingestion from external sources.

  • Sqoop: A common ingestion tool that is used to import data into Hadoop from any RDBMS. Sqoop provides an extensible Java-based framework that can be used to develop new Sqoop drivers to be used for importing data into Hadoop. Sqoop runs on a MapReduce framework on Hadoop, and can also be used to export data from Hadoop to relational databases.
  • Kafka: Kafka is a highly scalable messaging system that efficiently stores messages on disk partitions within a Kafka topic. Producers publish messages to Kafka topics, and Kafka consumers consume them as they please (example commands for both tools follow this list).
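
As referenced above, here is a minimal sketch of both tools. The database, topic, and host names are placeholders, and the Kafka scripts may be named kafka-console-producer.sh/consumer.sh depending on the distribution.

# Sqoop: import one RDBMS table into HDFS (prompts for the database password)
sqoop import \
  --connect jdbc:mysql://<dbhost>/sales \
  --username sqoop_user -P \
  --table customers \
  --target-dir /user/demo/customers \
  --num-mappers 2

# Kafka: publish and then read back messages on a topic
kafka-console-producer --broker-list <broker_host>:9092 --topic test_topic
kafka-console-consumer --bootstrap-server <broker_host>:9092 --topic test_topic --from-beginning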

6. Activity Guide VI: Oozie & How It Works in Scheduling Jobs

The next activity guide is about getting an understanding of Oozie and job scheduling.

CDH, Cloudera’s open-source distribution of Apache Hadoop and related projects, includes a framework called Apache Oozie that can be used to design complex job workflows and coordinate them to occur at regular intervals. In this how-to, you’ll review a simple Oozie coordinator job, and learn how to schedule a recurring job in Hadoop.
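
Once a coordinator or workflow is defined, it is typically submitted and monitored with the oozie command-line client; the server URL and properties file below are placeholders (11000 is the default Oozie port).

oozie job -oozie http://<oozie_host>:11000/oozie -config job.properties -run   # returns a job ID
oozie job -oozie http://<oozie_host>:11000/oozie -info <job_id>                # check status of the submitted job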

7. Activity Guide VII: Cluster Maintenance: Directory Snapshots

In this activity guide, you will get to know about Hadoop clusters and directory snapshots, and perform the steps for adding and removing cluster nodes.

Hadoop clusters require a moderate amount of day-to-day care and feeding in order to remain healthy and in optimal working condition. Maintenance tasks are usually performed in response to events: expanding the cluster, dealing with failures or errant jobs, managing logs, or upgrading software in a production environment.

Cloudera Manager supports both HBase and HDFS snapshots:

  • HBase snapshots allow you to create point-in-time backups of tables without making data copies, and with minimal impact on RegionServers. HBase snapshots are supported for clusters running CDH 4.2 or later.
  • HDFS snapshots allow you to create point-in-time backups of directories or the entire filesystem without actually cloning the data. These snapshots appear on the filesystem as read-only directories that can be accessed just like any other ordinary directories. HDFS snapshots are supported for clusters running CDH 5 or later; CDH 4 does not support snapshots for HDFS (see the command sketch after this list).
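
As referenced in the list above, a minimal HDFS snapshot workflow from the command line looks like this; the directory and snapshot names are placeholders.

hdfs dfsadmin -allowSnapshot /user/demo/data                 # mark the directory as snapshottable
hdfs dfs -createSnapshot /user/demo/data before_maintenance  # take a point-in-time snapshot
hdfs dfs -ls /user/demo/data/.snapshot                       # snapshots appear as read-only directories
hdfs dfs -deleteSnapshot /user/demo/data before_maintenance  # remove it when no longer needed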

Cloudera Manager enables the creation of snapshot policies that define the directories or tables to be snapshotted, the intervals at which snapshots should be taken, and the number of snapshots that should be kept for each snapshot interval. It also lets you create, delete, and restore snapshots manually.

You can get all these step-by-step activity guides, including live interactive sessions (theory), when you register for our Big Data Hadoop Administration Training.

If you register for our course, You’ll also get:

  1. Live Instructor-led Online Interactive Sessions
  2. FREE unlimited retake for next 3 Years
  3. FREE On-Job Support for next 3 Years
  4. Training Material (Presentation + Videos) with Hands-on Lab Exercises mentioned
  5. Recording of Live Interactive Session for Lifetime Access
  6. 100% Money Back Guarantee (If you attend sessions, practice and don’t get results, We’ll do full REFUND, check our Refund Policy)

Have queries? Contact us at contact@k21academy.com or if you wish to speak then mail your phone number and country code and a convenient time to speak.

If you are looking for commonly asked interview questions for Big Data Hadoop Administration then just click below and get that in your inbox or join our Private Facebook Group dedicated to Big Data Hadoop Members Only

Big Data Hadoop IQ Guide Banner Image


The post Big Data Hadoop Administration: Step by Step Activity Guides appeared first on Oracle Trainings.

Things You Must Know to Start Learning Big Data & Hadoop


This post covers Big Data Hadoop key points & things you must know to start learning Big Data & Hadoop. In this post, we have given you the roadmap for learning Hadoop as a beginner who wants to learn Hadoop but doesn’t know where to start.

It also covers some of the very basic questions, like why you should go for Big Data Hadoop. So, get ready to explore the best way to learn Hadoop!

Confused About What & Where to Learn & How to Make Career in Hadoop?

Now you must be having questions like:

  • How is the career growth in Hadoop?
  • What is the prerequisite to learning Hadoop?
  • From where should I start learning Hadoop?

Then this post is definitely for you and you will find that all your doubts have been answered at the end of this blog post.

Market Stats For BigData & Hadoop

Before we start learning Hadoop for beginners in detail, ask yourself why you want to learn Hadoop. Is it just because others are running on this track? Will it be helpful in the long run? So, why not look at the market statistics to evaluate its value. Well, here are some rough stats on Hadoop’s possibilities.

91% of market leaders rely on customer data to make business decisions. Moreover, they believe that data is a key driver of success in business. With changed marketing strategies, there is a surge in data generation in all sectors; an estimated 90% of it was generated in the last two years.

The big data market is expected to be worth USD 46 billion by the end of 2018, with annual growth of approximately 23% through the end of 2019. There is a considerable gap between the ongoing demand for rightly skilled big data resources and the supply.

Hence, there is an ongoing job opportunity in big data domain for Hadoop professionals indeed.

The Big Data market has grown a lot in the last few years, and it keeps growing fast. The job market in the Big Data field will grow tremendously in the next couple of years, and this growth will be seen across all big data jobs. So, if you choose to learn Big Data, you will have a number of job opportunities to build a Big Data career.

The yearly demand for data engineers, data scientists, and data developers will increase to 700,000 new job postings by the year 2020. According to IBM, the demand for data scientists will grow by 28% by the year 2020, and the number of such jobs in the US market will increase by 364,000.

What Things to Learn & in What Order?

No major prerequisites are required for taking this Hadoop administration training.

  • Having a basic knowledge of Linux can help because Hadoop runs on Linux.
  • Basics of Memory, CPU, OS, Storage, and Networks

Know the purpose of learning Hadoop!

Before you proceed to learn Hadoop as a beginner, stop for a while and think why Hadoop is so popular and its usability in the technology market. This will help you to understand the core idea behind Hadoop’s functionalities.


Who & Why Should Someone Learn Hadoop?

  • Hadoop Administration is not restricted to a particular field in IT.
  • An array of professionals such as Java developers, system admins, storage admins, DBAs, Software Architects, Data Warehouse Professionals, IT Managers, Software Developers and students interested in Hadoop cluster administration can benefit from this course
  • Hadoop is the most important framework for working with Big Data in a distributed environment. Due to the rapid deluge of Big Data and the need for real-time insights from huge volumes of data, the job of the Hadoop administrator is critical to large organizations. Hence there is a huge demand for professionals with the right skills and certification
  • Global Hadoop Market to Reach $84.6 Billion by 2021 – Allied Market Research
  • Shortage of 1.4 - 1.9 million Big Data Hadoop analysts in the US alone by 2018 – McKinsey
  • Hadoop Administrator in the US can get a salary of $120,000

Can I have a Career with Big Data and Hadoop?

Big Data is something which will get bigger day by day, so advancements in big data technology will not cease. Hadoop is a must-know skill in the current scenario, as it is the nucleus of Big Data solutions for many enterprises, and new technologies like Spark have evolved around Hadoop.

So learning Big Data technology like Hadoop or Spark is always going to be beneficial in long term too.

  • As long as data keeps growing, storing and processing it will need to be handled.
  • You can definitely have a career in Big Data and Hadoop.
  • There are a few solutions available in the market, but nothing matches the way Hadoop does it.
  • We also believe that it is still a toddler.
  • It has a long way to go, as IoT and other new technologies are generating even more data for analysis and for a better understanding of humans, machines, automobiles, and all other things.

You will get to know all of this and deep-dive into each concept related to BigData & Hadoop, once you will get enrolled in our Big Data Hadoop Administration Training

Another question which might come to your mind: what are all the things you will get when you enroll?

We are glad to tell you that:

Things you will get!!

  1. Live Instructor-led Online Interactive Sessions
  2. FREE unlimited retake for next 3 Years
  3. FREE On-Job Support for next 3 Years
  4. Training Material (Presentation + Videos) with Hands-on Lab Exercises mentioned
  5. Recording of Live Interactive Session for Lifetime Access
  6. 100% Money Back Guarantee (If you attend sessions, practice and don’t get results, We’ll do full REFUND, check our Refund Policy)

If you are looking for commonly asked interview questions for Big Data Hadoop Administration then just click below and get that in your inbox or join our Private Facebook Group dedicated to Big Data Hadoop Members Only.

Big Data Hadoop IQ Guide Banner Image


The post Things You Must Know to Start Learning Big Data & Hadoop appeared first on Oracle Trainings.

Oracle Access Manager 12C: RCU & Configure Domain (12.2.1.3.0) [Part2]


This post is part 2 of the step-by-step installation of Oracle Access Manager (OAM) and covers creating a domain for Oracle Access Manager 12.2.1.3.0.

For Part I (download software and create schema), click here.

RCU – Repository Creation Utility is a java based tool (available only for Windows and Linux) to create a schema in Database. For basics on RCU (Repository Creation Utility) click here

  • Use the sys account or any user with SYSDBA privileges on the database
  • The minimum RCU version is 12.2.1.3.0

Run the Repository Creation Utility

Note: Make sure that your listener is up and running, and that the CDB (Container Database) and PDB (Pluggable Database) are up and running. If not, start the database by following these steps:

Start Database

1. Login as user oracle, when prompted enter the password as Welcome1

2. Set the environment variables for the database (we have used the given directories in our environment)

export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
export ORACLE_SID=iam
export PATH=$ORACLE_HOME/bin:$PATH

3. Start Database Listener
lsnrctl start

4. Start Database
sqlplus “/as sysdba”

SQL> startup

5. Now, to open the PDB database use the below command
SQL> alter pluggable database all open;

Create OAM Schema Using RCU

Note: In OAM 12c there is no need to download the RCU software separately. The rcu utility is created under our Oracle_Home.

1. Launch a terminal window as the oracle user and run rcu to create the schema for OAM. In my case, rcu is located in the /u01/app/oracle/oam12c/oracle_common/bin directory.
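
A minimal sketch of launching it, using the ORACLE_HOME path from this series:

cd /u01/app/oracle/oam12c/oracle_common/bin
./rcu     # starts the graphical Repository Creation Utility wizard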

2. On Welcome page click Next

3. On Create Repository screen select Create Repository and click Next

4. On Database Connection Details screen enter the following and click Next

  • Database Type: Oracle Database
  • Host Name: 1805oam07.k21academy.com

Note: Here change the hostname of your VM that you get by typing hostname

  • Port: 1521
  • Service Name: pdbiam.k21academy.com (if you want to check your service name, check the listener status with the command lsnrctl status)
  • Username: sys
  • Password: Welcome1
  • Role: SYSDBA

5. On Repository Creation Utility – Checking Prerequisites screen click Ok

6. On Select Components screen select the following and click Next

Create a new Prefix: OAM

Component: Oracle Access Manager

Note: Once you select Oracle Access Manager As a Repository Component then all dependent components will be automatically selected

7. On Repository Creation Utility – Checking Prerequisites click Ok

8. On Schema Password screen select Use the same password for all schemas and click Next

  • Password: Welcome1
  • Confirm Password: Welcome1

9. On Map Tablespaces screen use default and click Next

10. On Repository Creation Utility – Creating Tablespaces screen click Ok

11. On Summary screen click create

12. On Complete Summary screen, Click Close.

 

Oracle Access Management 12c: Domain Configuration

Launch a terminal window as Oracle and enter the following command:

Start configuring the domain by running config.sh under $ORACLE_HOME/common/bin (where ORACLE_HOME is the OAM home from Part I of this series):
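
As a sketch, using the path stated above (on some installations config.sh sits under oracle_common/common/bin instead, so adjust if needed):

export ORACLE_HOME=/u01/app/oracle/oam12c   # OAM Oracle Home from Part I
cd $ORACLE_HOME/common/bin                  # or $ORACLE_HOME/oracle_common/common/bin on some installs
./config.sh                                 # launches the Fusion Middleware Configuration Wizard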

1. On Configuration Type, screen select Create a new domain and click Next

2. On Templates screen, select Create Domain Using Product Templates

  • Select the following templates:
  • Oracle Access Management Suite – 12.2.1.3.0[idm]

Note: Once you select idm As a template then all dependent components will be automatically selected

3. On Application Location screen, Click Next.

4. On Administrator Account enter the following

  • Password: Welcome1
  • Confirm Password: Welcome1

Click Next

5. On Domain Mode select Production and Under JDK select Oracle Hotspot 1.8.0_151 /usr/java/jdk1.8.0_151 and click Next

6. On Database Configuration Type screen select RCU Data and enter the following

  • Hostname: 1805oam07.k21academy.com (Make sure you put your hostname)
  • DBMS/Services: pdbiam.k21academy.com (check your service name with listener status)
  • Port: 1521
  • Schema Owner: OAM_STB
  • Schema Password: Welcome1

 Now, click on Get RCU Configuration, once it’s done successfully then click Next

7. On Component Datasources screen leave default and click Next

8. On JDBC Test screen, Once the test connections were successfully done then click Next

9. On Advanced Configuration screen leave default and click Next

10. On Configuration Summary screen click Create

11. Once it’s done Successfully then click Next

12. On End of Configuration screen click Finish

Note: Make a note of the Domain Location and Admin Server URL

OAM Start & Stop Server (Admin and Managed)

  • Next is to start & stop the OAM Admin Server & Managed Server (a hedged command sketch follows)
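
These are the usual WebLogic domain scripts for this step; the domain path, admin URL, and the managed server name oam_server1 are typical defaults used here for illustration, so confirm them against your own domain.

cd <DOMAIN_HOME>/bin

./startWebLogic.sh                                              # start the Admin Server
./startManagedWebLogic.sh oam_server1 http://<admin_host>:7001  # start the OAM managed server
./stopManagedWebLogic.sh  oam_server1 http://<admin_host>:7001  # stop the OAM managed server
./stopWebLogic.sh                                               # stop the Admin Server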

Related Posts

 Next Task For You

If you are looking for commonly asked interview questions for Oracle Access Manager then just click here and get that in your inbox.

If you like this and would like to learn Oracle Identity & Access Management then get a FREE Mini-Course on IDM/OAM/OID by just one click here.

The post Oracle Access Manager 12C: RCU & Configure Domain (12.2.1.3.0) [Part2] appeared first on Oracle Trainings.

Using CloudBerry Tool To Backup on Oracle Cloud


This post covers how to Configure Oracle Cloud with CloudBerry Backup Tool, a powerful solution that allows you to automate your backup & restore process.

Things Good to Know About CloudBerry Tool

  • CloudBerry Lab is a software company that develops online backup and file management solutions integrated with more than 20 cloud storage providers. CloudBerry Backup and CloudBerry Explorer are offered for personal use in a “freemium model”
  • 256-bit AES encryption
  • Image-based backup
  • Support for many different platforms
  • Local backup, cloud backup, and direct cloud-to-cloud backup
  • Restore images as VMs in the cloud (OracleCloud, Amazon EC2 or Azure VM)
  • Bare metal restore
  • Restore backups to new and different hardware
  • Data deduplication

Connecting Your Oracle Storage to the CloudBerry Tool to Take Backups

Download and Install CloudBerry Backup

You can find the latest version of CloudBerry Backup on the website: CloudBerry Backup for Windows

Configure CloudBerry Backup for Oracle Cloud Storage

Launch CloudBerry Backup, click on the Menu Icon in the upper-left corner of the screen and click Add New Account

Backup Options Available

  • Local to Cloud
  • Cloud to Local
  • Cloud to Cloud

In the list of available cloud storage providers, click on OracleCloud icon

In the dialog box, specify the account settings:

– Display name: It can be any name you want.

– User Name: “Storage – Identity Domain: Oracle Cloud User Name”. For example, “Storage-amitk21:amit.pancholia@gmail.com”

– Api key: This is a password for your Oracle Cloud account.

– Authentication Service: It is put automatically once you choose your account location from the drop-down menu

Note: You can create a container at configuration time, or you can select a container which you created previously. A container is nothing but object storage where you can put your backup on Oracle Cloud.

Once you specified all the required settings, click OK. You will see your new Oracle account displayed in the list of registered accounts.

How to Select Oracle Account in Backup Wizard

You can select or create a new Oracle Cloud Storage account in Backup Wizard when creating a new backup plan.

Click Files button to start Backup Wizard and select your new Oracle account from the list of registered accounts.

Once your Oracle account is selected, click Next and complete the rest of the Backup Wizard steps to create and customize your backup plan!

Note: This Overview of  CloudBerry on Oracle Cloud  is from our “Oracle Cloud DBA 6 Weeks Step by Step Training Program” with 3 Years On-Job Support and Unlimited FREE Retakes (If you need to know more about this program then reach out to our team at contact@k21academy.com )

Related/Further Reading

If you are just starting out in Cloud then I highly recommend you to go through these posts first

Did You Start Your Cloud Journey?   

Get a FREE copy in your inbox with steps to register for Oracle Cloud, get 300 USD FREE credit to practice, and join our private closed Facebook group for the Oracle Cloud community.

The post Using CloudBerry Tool To Backup on Oracle Cloud appeared first on Oracle Trainings.

Exadata Overview & Architecture


This post covers an Exadata overview, concepts, and architecture, including migrating a database to the engineered system (Exadata).

This post is for beginners as well as experienced professionals such as DBAs who are just starting to learn Exadata, and covers some of the very basic questions like what Exadata is, its architecture, and how to prepare before migrating a database to the engineered system (Exadata).

In the next follow-up post, I'll cover what you must learn and the roadmap to start learning Exadata.

About Exadata

Exadata is a database appliance designed by Oracle that has the competency to provide support to a combination of database systems such as OLTP (On-line Transaction Processing) and OLAP (Online Analytical Processing), the transactional and analytical database systems respectively. Exadata offers its users an enhanced functionality relating to enterprise-class databases and their associated workloads.

Why Exadata? 

The Oracle Exadata Database Machine is the World’s most secure database machine. It is engineered to be the top performing and best available platform for running the Oracle Database. This simple and fast to implement machine protects and powers your most important database and is the perfect foundation for a consolidated database cloud

 When to use Exadata? 

Oracle Exadata is best for customers who consider data warehousing and consolidation of the database. Enterprises targeting faster growth and striving to become more responsive to market fluctuations and client requirements must implement pioneering technologies like Oracle Exadata.

Architecture of Exadata Machine

Exadata Architecture

Component of Exadata Storage Server

It is another storage device, which has CPU, memory, disks, network cards, an operating system (Oracle Linux), and, most importantly, the Exadata Storage Server software. There are mainly three services which run on the cell server:

CellSRV (Cell Service): This is the primary component of Exadata Cell which provides all Exadata Storage services. This process communicates with the Database server for providing database blocks

MS (Management Server): The MS service provides an interface for the DBA to communicate with or manage the cell server. CellCLI is the command-line tool which an Exadata DBA uses for performing Exadata administration tasks.

RS (Restart Server): This service ensures the proper functioning of the Exadata Storage Server. RS monitors CellSRV and MS for liveness and restarts them whenever required.

Four Main Features of Exadata Storage Server

1. Smart Flash Cache Intelligent Caching: Storage servers usually have two kinds of storage: hard disk and Flash Cache. Flash Cache is a fast storage device used to keep the most frequently accessed data for faster access; whenever a block is required, it is first looked up in the Flash Cache and, if available, returned to the server without going to the hard disk to read the block. This is a common feature in storage servers.

2. Hybrid Columnar Compression (HCC): Compression is used for reducing storage consumption for large databases. In this technique, a logical unit called a compression unit is used to store hybrid columnar compressed rows. At the time of data loading, column values are detached from rows, grouped, and then compressed. After compression, the data is fitted into the compression unit.

3. Smart Scan Processing: The Exadata storage server has the intelligence to filter data at the storage level, rather than transferring it all to the database server.
Exadata Smart Scan performs filtering for the following kinds of queries:

  • Predicate Filtering: In a query with a WHERE clause, only the blocks satisfying the WHERE clause condition go to the database server, not the entire table.
  • Column Filtering: Suppose a query like “select name, age from employee” is executed on the DB server; then only the name and age column data is sent to the database server, not all columns of the table.
  • Join Processing: Join processing is done at the storage level itself, so only filtered data is sent to the DB server.

4. I/O Resource Management: In a traditional database environment, if you have more than one database running on a shared storage server, large queries from one database can use more resources and cause performance issues for other databases. In another case, a batch job started in one database can cause performance trouble in an OLTP database. I/O Resource Management lets you control how databases share the storage server's I/O resources.

Managing Exadata Machine

Some important utilities used in managing the Exadata machine:

CellCLI – Let's move on to the next layer in the software stack: the Exadata Storage Server. To manage this, Oracle provides a command-line tool: CellCLI (Cell Command Line Interpreter). All the cell-related commands are entered through CellCLI.

DCLI – The scope of a CellCLI command is the cell where it is run, not other cells. Sometimes you may want to execute a command across multiple cells from one command prompt, e.g. shutting down multiple nodes. There is another command-line tool for that: DCLI.

SQL – Once the cell disks are made available to the database nodes, the rest of the work is similar to what happens in a typical Oracle RAC database, in the language you use every day: SQL. SQL*Plus is an interface many DBAs use. You can also use other interfaces such as Oracle SQL Developer. If you have Grid Control, there are lots of commands you don’t even need to remember; they will be GUI based.

ASMCMD – This is the command-line interface for managing ASM resources like diskgroups, backups, etc.

SRVCTL – SRVCTL is a command-line interface to manage Oracle Database 11.2 RAC clusters. At the database level, most of the cluster-related commands, e.g. starting/stopping cluster resources, checking status, etc., can be done through this interface.

CRSCTL – CRSCTL is another tool to manage clusters. As of 11.2, the need to use this tool has dwindled to near zero. But there is at least one command in this category.
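
A few illustrative invocations of these utilities; the group file, database name, and user below are placeholders, and the exact output depends on your Exadata release.

cellcli -e list cell detail                                # on a storage cell: status and attributes of this cell
cellcli -e list celldisk                                   # cell disks presented by this cell
dcli -g cell_group -l celladmin "cellcli -e list flashcache detail"   # run the same CellCLI command on every cell listed in cell_group
srvctl status database -d <db_unique_name>                 # on a DB node: status of a cluster-managed database
crsctl check cluster -all                                  # clusterware health across all nodes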


Migrating Database to Engineered systems like Exadata

Migration Preparation is Essential:

                                                     

Prepare Source Database:

  • Database upgrade to 11.2
  • Hardware upgrade
  • Drop unnecessary schema objects

Prepare Exadata System:

  • ASM configuration
  • Install latest versions
  • Review Exadata Critical Issues

Note: There are many ways to migrate to Exadata; the “best” way depends on your environment and goals.

This post is from our Exadata Training, in which we cover  Exadata Overview & Architecture, Exadata Storage Server Configuration, Resource Management, Optimizing Database Performance & much more.

If you are looking for commonly asked interview questions for Exadata Administration then just click below and get that in your inbox or join our Private Facebook Group dedicated to Exadata Members Only.

In the next follow-up post, I'll cover what you must learn and the roadmap to start learning Exadata.

Next Task For You

Click on the image below to download your FREE Guide, 5 Key Exadata Docs, Every Oracle DBA Must Read, & Learn More.

Oracle Exadata Banner Image

References

MOS 888828.1 (Exadata Database Machine and Exadata Storage Server Supported Versions)

MOS 1270094.1 (Exadata Critical Issues)

 

The post Exadata Overview & Architecture appeared first on Oracle Trainings.

All About 1Z0-932 – Oracle Cloud Infrastructure 2018 Architect Associate


Folks,

I am happy to share that I have cleared the [1Z0-932] Oracle Cloud Infrastructure Architect Associate exam with a good percentage.

Oracle Certified Associate (1Z0-932)

There were 70 questions and 105 minutes were allocated, and most of the questions were based on designing the architecture of various infrastructure components using OCI resources. If you have a clear understanding of the various services and their scope, it is easy to answer the questions and clear the exam.

Going through the OCI exam documentation is a must, but to make it stick in your memory it is important to perform the hands-on activities, as many concepts only become clear when you actually do them.

I started my preparation with the practice exam mentioned on the Oracle site for OCI and was able to score 50% on the first attempt. This gave me an idea of the gaps, and then Oracle documentation and YouTube videos helped me scale up my OCI skills.

Important points to keep in mind:

  • Go through the course content, with the scope and weight of each topic, thoroughly. Don't leave out any topic (basic or advanced); click here.

Go through these topics first: IAM, Compute, Storage, VCN as this will be base for all other topics.

  • The available services and offerings of OCI should be at your fingertips.
  • Designing a VCN and Compute, and knowing which storage is suitable in which scenario, is what you need to understand.
  • Oracle Free Account is sufficient to understand most of the aspects of OCI.
  • A clear understanding of OCI, OCI-C, and C@C
  • It's good that Oracle gave weight to the automation of infrastructure via Terraform. Preparing it in a flow-chart way was helpful, and I was able to recall it easily. Note all the minute details; it is better to work through one example and do it.
  • As the Database is one of Oracle's core products, expect a good number of questions on it.
  • Time management is the most important thing to be careful about during the exam, as it takes time to understand the scenarios.

You can refer below resources for understanding the concepts :

That's all from my side; if you want to discuss anything on the same, feel free to post your queries.

This post is from our OCI Training, in which we cover all the topics required to clear your 1Z0-932 certification in both Theoretical and Practical Approach, where I and Atul will be guiding you throughout the training.

If you have any doubts please reach out to us at contact@k21academy.com

If you are looking for commonly asked interview questions for Oracle Cloud Infrastructure (OCI) then just click below and get them in your inbox, or join our Private Facebook Group.


The post All About 1Z0-932 – Oracle Cloud Infrastructure 2018 Architect Associate appeared first on Oracle Trainings.


[Video] Oracle Identity and Access Management: Oracle Identity Federation


In this video, we are going to look at Oracle Identity Federation (OIF) which is a part of Oracle Identity and Access Management (IDAM).

OIF provides an authentication process across domains. For example, you work in a company, and that company uses an application hosted somewhere else, like Salesforce (a cloud application), or the company has tied up with a travel agency site or another company that provides travel services such as booking flights or hotels. SSO between that service provider and the identity provider is the federation part. This authentication across two different companies, enterprises, or domains is called identity federation. To know more about OIF, go through the video below.

Federation has some standards; below are the different protocols supported by federation.
1. SAML V1 and V2: SAML (Security Assertion Markup Language)
2. Liberty
3. OpenID
4. OAuth

In identity federation, there are always two parties:
1. Identity Provider: The identity provider is the enterprise, domain, or company where the users reside.
2. Service Provider: The service provider is from the domain or organization that provides the service.

Before federation can happen between these two parties, they must trust each other, and that is part of the federation configuration. There is metadata that contains the details about the service provider, and other metadata for the identity provider; the two parties exchange this metadata with each other so that they can trust each other.

So now whats going to happen:

  1. When the user tries to access the service provider, the service provider checks who the identity provider for this application is.
  2. The service provider redirects the user to the identity provider. The user already has an account on the identity provider side, so the user logs in with their username and password, and upon login the identity provider creates a SAML token (if you are using the SAML protocol).
  3. The token created by the identity provider is sent to the service provider. Because there is already a trust between the service provider and the identity provider, the service provider checks that the token was issued by an identity provider it trusts, and then grants access to the application; it can also retrieve the user's identity from that SAML token.

So this is all in nutshell about Oracle Identity Federation. Please go through the video to know in detail.

We cover this in one of the modules of our Oracle Access Manager Training. We cover a lot of other topics like OAM, FMW and WebLogic concepts, OID, OHS, OAM integration with other Oracle products, cloning, HA, DR, and much more. Please check our Step by Step Activity Guides You Must Perform to become an Expert in IDM to see everything we cover in this training.

Did You Find this Video useful?

Leave a Comment.

Related Posts

 Next Task For You

If you are looking for commonly asked interview questions for Oracle Access Manager then just click here and get that in your inbox.

If you like this and would like to learn Oracle Identity & Access Management then get a FREE Mini-Course on IDM/OAM/OID by just one click here.

The post [Video] Oracle Identity and Access Management: Oracle Identity Federation appeared first on Oracle Trainings.

[Video] Oracle Database on Amazon AWS Overview


Technology changes very quickly, and evolution is always imminent. This post gives you an overview of Oracle Database on Amazon AWS.

Amazon AWS is great at providing an IaaS platform, and Oracle is well known for its platform solutions. Oracle on the AWS platform is a strong choice for customers to consider. The AWS IaaS platform offers a scalable, secure, and highly available infrastructure for Oracle platform solutions such as Database, Middleware, and applications. Also, Oracle allows customers to choose IaaS solutions for their platform technology. Interestingly, AWS also offers solid database platform solutions to which Oracle Database workloads can be migrated. There are a lot of options, but they all come with complexity regarding choice and commercial and contractual positions.

Note: The above video is a bonus from our Cloud DBA Training. (For the list of step-by-step hands-on guides and to register for an Oracle Cloud trial account, refer to this post.)

Things to Consider When Moving an Oracle Database to Amazon AWS:

1.) Flavors available on AWS for Oracle – there are two options:

  • Using Amazon Relational Database Service (Amazon RDS) for Oracle
  • Running a self-managed Oracle Database directly on Amazon Elastic Compute Cloud (Amazon EC2); a hedged AWS CLI sketch for the RDS option follows this list
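
For illustration, creating a small license-included RDS for Oracle instance with the AWS CLI might look like the sketch below. The identifier, instance class, and credentials are placeholders, and EE workloads under BYOL would use a different engine and license combination.

aws rds create-db-instance \
  --db-instance-identifier ora-demo \
  --engine oracle-se2 \
  --license-model license-included \
  --db-instance-class db.m4.large \
  --allocated-storage 100 \
  --master-username admin \
  --master-user-password '<choose-a-strong-password>'

# Check provisioning status
aws rds describe-db-instances --db-instance-identifier ora-demo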

 2.) Understanding the technical limitations of your options:

  • Amazon RDS can support up to 6TB and a maximum of 30,000 IOPS
  • Amazon RDS is not suitable if you need to manage backups or point-in-time recoveries yourself
  • AWS does not currently enable Oracle Real Application Clusters (RAC) natively on either Amazon RDS or EC2
  • AWS does allow all major version instances of Oracle products to be deployed on AWS.
  • AWS offers its best of breed solutions for scalability, security and high availability

3.) Understanding the licensing and commercial implications:

  • Except for one option of Amazon RDS for Oracle database (standard one), all other options on AWS requires to Bring Your Own License (BYOL) of Oracle
  • With the recent changes of Oracle licensing policies since April 2017, the BYOL Oracle license requirement on AWS has doubled, however, the customer needs to adhere to Oracle support policy restrictions for such scenarios. Oracle’s own PaaS (DBaaS and other flavors) don’t require BYOL

Choosing The Right Solution (Amazon RDS for Oracle vs Oracle on Amazon EC2)

Amazon RDS for Oracle   

  • AWS recommends RDS first
  • Focus on tasks that bring value to your business
  • Focus on high-level tuning tasks and schema optimization
  • Lack of in-house expertise managing databases

Oracle on Amazon EC2   

  • You need full control over the DB instances
  • Control over backups, replication, and clustering
  • Use features and options not available in Amazon RDS
  • Size and performance needs exceed Amazon RDS offering

Implementation Diagram of Oracle Database on Amazon AWS

This image gives you the high-level snapshot of deploying Oracle Database on various Services offered by Amazon AWS

Note: This Overview of  Oracle Database  on Amazon AWS  is from our “Oracle Cloud DBA 6 Weeks Step by Step Training Program” with 3 Years On-Job Support and Unlimited FREE Retakes (If you need to know more about this program then reach out to our team at contact@k21academy.com )


Related/Further Reading

If you are just starting out in Cloud then I highly recommend you to go through these posts first

Did You Start Your Cloud Journey?   

Get a FREE copy in your inbox with steps to register for Oracle Cloud, get 300 USD FREE credit to practice, and join our private closed Facebook group for the Oracle Cloud community.

The post [Video] Oracle Database on Amazon AWS Overview appeared first on Oracle Trainings.

Big Data Hadoop: Apache Spark Vs Hadoop MapReduce


In this post, we have covered key differences between Apache Spark Vs Hadoop MapReduce.

As we felt that people were getting confused between Apache Spark & Hadoop MapReduce, we thought of writing this blog; if you go through the post completely, you will find all your doubts cleared.

If you are just starting out in Big Data & Hadoop then I highly recommend you to go through these posts first:

  • Big Data Hadoop Keypoints & Things you must know to Start learning Big Data & Hadoop, check here
  • Big Data & Hadoop Overview, Concepts, Architecture, including Hadoop Distributed File System (HDFS), Check here

Key Differences Between Hadoop & Spark

Hadoop and Apache Spark are both big-data frameworks, but they don’t really serve the same purposes. Hadoop is essentially a distributed data infrastructure: It distributes massive data collections across multiple nodes within a cluster of commodity servers, which means you don’t need to buy and maintain expensive custom hardware. Spark, on the other hand, is a data-processing tool that operates on those distributed data collections; it doesn’t do distributed storage

  • Apache Spark: Apache Spark is a general-purpose & lightning-fast cluster computing system. It provides high-level APIs in, for example, Java, Scala, Python, and R. Apache Spark is a tool for running Spark applications, and it can run up to 100 times faster than Hadoop MapReduce in memory and 10 times faster when accessing data from disk.
  • Hadoop: Hadoop is an open source, Scalable, and Fault-tolerant framework. It efficiently processes large volumes of data on a cluster of commodity hardware. Hadoop is not only a storage system but is a platform for large data storage as well as processing.

Feature-Wise Comparison Between Apache Spark & Hadoop:

1. Speed:

  • Apache Spark – Spark is a lightning-fast cluster computing tool. Apache Spark runs applications up to 100x faster in memory and 10x faster on disk than Hadoop. Spark makes this possible by reducing the number of read/write cycles to disk and storing intermediate data in memory.
  • Hadoop MapReduce – MapReduce reads and writes from disk, as a result, it slows down the processing speed.

2. Difficulty:

  • Apache Spark – Spark is easy to program as it has tons of high-level operators with RDD – Resilient Distributed Dataset.
  • Hadoop MapReduce – In MapReduce, developers need to hand code each and every operation which makes it very difficult to work.

3. Easy to Manage:

  • Apache Spark – Spark is capable of performing batch, interactive, machine learning, and streaming workloads all in the same cluster, which makes it a complete data analytics engine. Thus, there is no need to manage a different component for each need; installing Spark on a cluster is enough to handle all the requirements.
  • Hadoop MapReduce – MapReduce only provides the batch engine, so we are dependent on different engines, for example Storm, Giraph, Impala, etc., for other requirements. This makes it very difficult to manage many components.

4. Real-time analysis:

  • Apache Spark – It can process real-time data i.e. data coming from the real-time event streams at the rate of millions of events per second, e.g. Twitter data for instance or Facebook sharing/posting. Spark’s strength is the ability to process live streams efficiently.
  • Hadoop MapReduce – MapReduce fails when it comes to real-time data processing as it was designed to perform batch processing on voluminous amounts of data.

5. Fault tolerance:

  • Apache Spark – Spark is fault-tolerant. As a result, there is no need to restart the application from scratch in case of any failure.
  • Hadoop MapReduce – Like Apache Spark, MapReduce is also fault-tolerant, so there is no need to restart the application from scratch in case of any failure.

6. Security:

  • Apache Spark – Spark is a little less secure in comparison to MapReduce because it supports only authentication through a shared secret (password authentication).
  • Hadoop MapReduce – Apache Hadoop MapReduce is more secure because of Kerberos, and it also supports Access Control Lists (ACLs), a traditional file permission model.

You can use one without the other: Hadoop includes not just a storage component, known as the Hadoop Distributed File System, but also a processing component called MapReduce, so you don’t need Spark to get your processing done. Conversely, you can also use Spark without Hadoop. Spark does not come with its own file management system, though, so it needs to be integrated with one — if not HDFS, then another cloud-based data platform. Spark was designed for Hadoop, however, so many agree they’re better together.

Conclusion: Spark works best as a complement to Hadoop rather than a replacement. Though you can run Spark in standalone mode, when Spark is integrated on top of Hadoop its processing capability scales out with the number of commodity nodes in the Hadoop cluster.

You will get to know all of this and deep-dive into each concept related to Big Data & Hadoop once you enroll in our Big Data Hadoop Administration Training.

Another question that might come to your mind: what exactly do you get when you enroll?

We are glad to tell you that:

Things you will get!!

  1. Live Instructor-led Online Interactive Sessions
  2. FREE unlimited retake for next 3 Years
  3. FREE On-Job Support for next 3 Years
  4. Training Material (Presentation + Videos) with Hands-on Lab Exercises mentioned
  5. Recording of Live Interactive Session for Lifetime Access
  6. 100% Money Back Guarantee (If you attend sessions, practice and don’t get results, We’ll do full REFUND, check our Refund Policy)

If you have not looked at our Big Data Hadoop Administration Workshop and want to check what we cover, then check here, along with the Step-by-Step Hands-On Activity Guide that we cover in the training.

If you are looking for commonly asked interview questions for Big Data Hadoop Administration then just click below and get that in your inbox or join our Private Facebook Group dedicated to Big Data Hadoop Members Only.

Big Data Hadoop IQ Guide Banner Image

The post Big Data Hadoop: Apache Spark Vs Hadoop MapReduce appeared first on Oracle Trainings.

[Video] Oracle Autonomous Data Warehouse Cloud Service 18c Now on OCI


This post covers a recent Oracle Cloud update that is already creating a buzz in cloud technology: “Autonomous Data Warehouse Cloud Service 18c” on Oracle Cloud Infrastructure (OCI).

Oracle keeps updating its cloud services and making enhancements to every service it provides, so we thought of writing this post to keep you updated about the changes and enhancements that Oracle makes.

Note: OCI is an offering under the IaaS service model (the other 2 cloud service models are SaaS & PaaS), and OCI is a re-branding of the Bare Metal Cloud Service (BMCS).

Build Your First Autonomous Data Warehouse 18c on OCI

In this video, Oracle ACE Atul Kumar covers how to deploy your first Autonomous Data Warehouse on Oracle Cloud Infrastructure (OCI). OCI is the 2nd generation of Oracle Cloud, while OCI-C (Classic) is the first generation.

Before we deep-dive into Autonomous Data Warehouse Cloud Service 18c, let us understand the basics.

Databases are broadly divided into two types:

OLTP (On-line Transaction Processing): is characterized by a large number of short online transactions (INSERT, UPDATE, DELETE). The main emphasis for OLTP systems is very fast query processing, maintaining data integrity in multi-user environments, and effectiveness measured in transactions per second. An OLTP database holds detailed, current data, and the schema used to store transactional data is typically a normalized entity-relationship model.

OLAP (On-line Analytical Processing | Data Warehouse): is characterized by a relatively low volume of transactions. Queries are often very complex and involve aggregations, and for OLAP systems response time is the key effectiveness measure. OLAP databases are widely used by data mining techniques and hold aggregated, historical data stored in multi-dimensional schemas (usually a star schema).
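To make the contrast concrete, here is a small illustrative sketch (Python with an in-memory SQLite database purely for demonstration; the table and data are made up): the first statements are OLTP-style short transactions, the last query is an OLAP-style aggregation over historical data.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL, order_date TEXT)")

# OLTP style: many short transactions touching individual rows
cur.execute("INSERT INTO orders (region, amount, order_date) VALUES (?, ?, ?)",
            ("EMEA", 120.50, "2018-06-01"))
cur.execute("UPDATE orders SET amount = 99.99 WHERE id = ?", (1,))
conn.commit()

# OLAP style: a complex aggregation over historical data
cur.execute("SELECT region, strftime('%Y', order_date) AS yr, SUM(amount) "
            "FROM orders GROUP BY region, yr")
print(cur.fetchall())
conn.close()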

OLTP | OLAP

 

Now the question arises: what is Oracle Autonomous Data Warehouse?

Oracle Autonomous Data Warehouse Cloud uses applied machine learning to self-tune and automatically optimize performance while the database is running. It is built on next-generation Oracle Autonomous Database technology, using artificial intelligence to deliver unprecedented reliability, performance, and highly elastic data management, enabling data warehouse deployment in seconds.

Architecture of Modern Cloud Data Warehousing

Architecture for Modern Cloud Data Warehousing

Things good to know about Oracle  Autonomous Data Warehouse 18c

  • ADWC: Autonomous Data Warehouse Cloud Service, fully managed pre-configured data warehousing environment
  • ADWC Supports
    • Structured Query Language (SQL)
    • Business Intelligence Tools (BI)
  • ADWC is a PaaS offering and built on Top of Oracle Database 18c
  • ADWC until recently was available only on OCI-C; from June 2018 it is also available on OCI
  • ADWC is elastic, meaning you start with a CPU count & storage allocation and can scale up or scale down at any point in time (see the sketch below this list)
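For example, scaling can be driven programmatically. The sketch below uses the OCI Python SDK to change the CPU core count of an Autonomous Database; it assumes the SDK is installed and configured (~/.oci/config), the OCID is a placeholder, and the method and model names reflect the current SDK rather than necessarily the exact ADWC 18c-era API, so treat it as an illustration only.

import oci

config = oci.config.from_file()                      # reads ~/.oci/config
db_client = oci.database.DatabaseClient(config)

adw_ocid = "ocid1.autonomousdatabase.oc1..example"   # placeholder OCID
details = oci.database.models.UpdateAutonomousDatabaseDetails(cpu_core_count=4)

# Scale the instance up (or down) to 4 OCPUs while it keeps running.
db_client.update_autonomous_database(adw_ocid, details)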

Why Oracle Autonomous Data Warehouse?

  • A fully autonomous database capable of self-patching, self-tuning, upgrading itself while the system is running, eliminating manual human error-prone processing.
  • Based on the next generation cloud database platform using artificial intelligence including machine learning to deliver adaptive caching and indexing all powered by the Oracle Exadata engineered infrastructure.
  • Oracle customers have fine-grained control of pre-configured compute and storage resources, allowing independent scale-up and scale-down to avoid overpaying for expensive, unused, fixed blocks of cloud resources.
  • Built-in machine learning technology eliminates manual configuration errors to ensure reliability. In addition, unlimited concurrent access combined with advanced clustering technology enables businesses to grow data stores without any downtime.

Another, but very obvious, question that will come to your mind: since it is autonomous, is it going to steal DBA jobs?

The answer is no; the points below should clear all your doubts.

With Autonomous, you DON’T

  • You don’t need day-to-day database administration (but you still need an admin; check the tasks below)
  • No backups required (backups are done automatically)
  • No patching or upgrades required (done automatically by Oracle)
  • Growing or shrinking storage does NOT require manual intervention
  • No tuning required, such as parallelism, indexing, or compression

With ADWC, you still need to:

  • Provision the ADWC instance
  • Start/stop the service, load data into ADWC, and run queries (see the connection sketch below this list)
  • Secure ADWC, i.e. control who can see which instance & what they can do (compartments, roles, logins, passwords)
  • Grant access to users and BI tools
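For instance, loading data and running queries remain your tasks; a minimal sketch using the cx_Oracle driver is shown below. It assumes you have downloaded the ADWC credentials wallet and pointed TNS_ADMIN at it, and the user, password and service name are placeholders.

import cx_Oracle

# TNS_ADMIN must point to the unzipped ADWC wallet directory before this runs.
conn = cx_Oracle.connect("ADMIN", "MyPassword_123", "adwcdemo_high")
cur = conn.cursor()
cur.execute("SELECT sysdate FROM dual")       # simple sanity-check query
print(cur.fetchone())
conn.close()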

Related/Further Reading

If you are just starting out in Cloud, then I highly recommend you go through these posts first

References:

Next Task

The post [Video] Oracle Autonomous Data Warehouse Cloud Service 18c Now on OCI appeared first on Oracle Trainings.

[Q/A]Oracle Database Cloud Service (DBCS) Certification (1Z0-160)


This post is part of the Q/A series for Module 1: Cloud Overview, DBCS Offerings & Create Account from our Oracle Database Cloud Service (DBCS) Certification (1Z0-160) training. These questions will help you in clearing your Database Cloud Service (DBCS) Certification (1Z0-160); if you have any question related to Oracle Cloud, you can ask it in the comments section.

If you want an overview of the various Oracle Database Cloud Service (DBCS) offerings, then I would highly recommend going through this post from Oracle ACE Atul Kumar (Click Here).

Which two statements are true about the Database as a Service (DBaaS) instances and Oracle database instances that are provided by Oracle Public Cloud?

  1. A DBaaS instance requires customers to install any additional management tools for their environment
  2. A DBaaS instance never provides a pre-created Oracle database.
  3. An Oracle database instance that is provided as part of DBaaS runs the same executable that would be run with the same version and release of Oracle Database on private premises.
  4. A DBaaS instance always provides a customer-selected version of the Oracle database software.
  5. Only one Oracle database instance can run in a DBaaS instance on Oracle Public Cloud.

This question is from our Oracle Database Cloud Service (DBCS) Certification (1Z0-160) training, where we cover each and every topic required to clear your 1Z0-160 certification with both a theoretical and a practical approach.

Your next task is to post the answer in the comments section and also let us know where you are struggling in clearing the certification.

Please stay tuned for our next follow-up post, where we will declare the answer to this question along with another question from our Oracle Database Cloud Service (DBCS) Certification (1Z0-160) training.

Related/Further Reading

If you are just starting out in Cloud, then I highly recommend you go through these posts first

Next Task

So your next task is to join the FREE webinar on How To Build Your First Database On Cloud (PaaS). Click on the image below to register for FREE.

The post [Q/A]Oracle Database Cloud Service (DBCS) Certification (1Z0-160) appeared first on Oracle Trainings.

Oracle GoldenGate 12c (12.3.0.1): New Features/Changes


This post covers the new features/changes of Oracle GoldenGate 12c (12.3.0.1.0). Oracle GoldenGate is software for real-time data integration and replication in heterogeneous IT systems.

If you are new to Oracle GoldenGate then check our previous posts about Oracle GoldenGate 12c Overview & Components.

If you want to install Goldengate 12c then go through our post here Oracle GoldenGate 12c Download & Installation and for troubleshooting go through Oracle GoldenGate 12c: Troubleshooting using LogDump Utility

As of August 18, 2017, the latest release of Oracle GoldenGate 12c (12.3.0.1.0) is available for download!

You can find the links to download Oracle GoldenGate 12c (12.3.0.1.0) at this link: http://www.oracle.com/technetwork/middleware/goldengate/downloads/index.html

New features:

1. GoldenGate 12.3 Platform Features – All Platforms For the Oracle Database

a) Microservices Architecture

A new service-based architecture that simplifies configuration, administration, and monitoring for large-scale and cloud deployments. The RESTful services enable secure remote access using role-based authorization over HTTPS and WebSocket (streaming) protocols. Each service also has an embedded HTML5 browser-based UI for a better user experience, in addition to traditional command-line access for ggsci-style scripting/automation. It enables applications to embed, automate, and orchestrate GoldenGate across the enterprise.

There are five main components of the Microservices Architecture. The following diagram depicts the contrast between Oracle GoldenGate MA and the Classic Architecture components and illustrates how replication processes operate with the secure REST API interfaces.
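To give a feel for the RESTful interface, here is a hedged Python sketch that queries the Service Manager over HTTPS; the host, port, credentials and the exact endpoint path are assumptions for illustration only, so check the GoldenGate Microservices REST API documentation for your release.

import requests
from requests.auth import HTTPBasicAuth

base_url = "https://gg-host:9001"                 # assumed Service Manager URL
auth = HTTPBasicAuth("oggadmin", "password")      # placeholder credentials

# Assumed endpoint: list the deployments registered with the Service Manager.
resp = requests.get(base_url + "/services/v2/deployments", auth=auth, verify=False)
print(resp.status_code, resp.json())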

b) Support for Oracle Database 12.2.0.1

Oracle Database 12.2.0.1 provides many exciting features for organizations to use, and Oracle GoldenGate 12.3 is designed to leverage many of them. Organizations get a fully supported and integrated replication framework that provides performance and throughput enhancements within Integrated Capture, Integrated Apply, and many other processes.

c) Parallel Replicat
Highly scalable apply engine for the Oracle database that can automatically parallelize the apply workload, taking into account dependencies between transactions. Parallel Replicat provides all the benefits of Integrated Replicat while performing the dependency computation and parallelism outside the database. It parallelizes the reading and mapping of trail files and provides the ability to apply large transactions quickly in Oracle Database 11g (11.2.0.4) and later.

e) Automatic Conflict-Detection-Resolution (CDR) without application changes
Quickly enable active/active replication while ensuring consistent conflict-detection-resolution (CDR) without modifying application or database structure. With automatic CDR you can now configure and manage Oracle GoldenGate to automate conflict detection and resolution when it is configured in Oracle Database 12c Release 2 (12.2) and later.

f) Procedural Replication to enable simpler application migrations and upgrades
Procedural Replication in Oracle GoldenGate allows you to replicate Oracle-supplied PL/SQL procedures, avoiding the shipping and applying of high volume records usually generated by these operations.

g) Database Sharding
Oracle Sharding is a scalability and availability feature designed for OLTP applications that enables distribution and replication of data across a pool of Oracle databases that share no hardware or software. The pool of databases is presented to the application as a single logical database. Data redundancy is managed by Oracle GoldenGate via active-active replication that is automatically configured and orchestrated by the database engine invoking the RESTful APIs.

h) Fast Start Capture
Fast Start Capture is a new feature for Integrated Capture that will improve overall performance and enable you to quickly start capturing and replicating transactions.

2. For SQL Server

a) Introducing a new, CDC based Capture

Oracle GoldenGate 12.3 will introduce a new Change Data Capture based Extract, which offers new functional advantages over our existing transaction log based capture method. Benefits include:

  • Capture from SQL Server 2016
  • Remote Capture
  • Transparent Data Encryption (TDE) support

b) Certification to capture, from an AlwaysOn primary and/or synchronous secondary database
With an increase in the number of customers running their application-critical databases in an AlwaysOn environment, Oracle GoldenGate 12.3 is the first version to certify capture from either the primary database or a read-only synchronous secondary database.

c) Delivery to SQL Server 2016 Enterprise Edition

3. For DB2 z/OS

a) Remote Execution
The new remote execution includes both remote capture and delivery for DB2 z/OS. Running Oracle GoldenGate off the z/OS server significantly reduces the MIPS consumption and allows the support of AES encryption and credential store management.

4. For DB2 i

a) Support for IBM i 7.3
Oracle GoldenGate supports the latest DB2 for i platform.

5. For MySQL

a) DDL replication between MySQL Databases
With the DDL replication between MySQL databases, there is no need to stop Oracle GoldenGate replication when there are DDL changes on the source database.

Reference:

This post is from our Oracle GoldenGate 12c Administration Training, in which we cover  Architecture, Installation, Configuring & Preparing the Environment, DML Replication – Online Change Synchronization, Initial Load, Zero Downtime Migration & Upgrading using GoldenGate, Oracle GoldenGate Security, Performance of Oracle GoldenGate and Troubleshooting and much more.

Do you have any queries in Oracle GoldenGate 12c?

Have a question related to your Oracle GoldenGate Career or Training?

Post any query regarding Oracle GoldenGate below and we will be happy to answer it.

The post Oracle GoldenGate 12c (12.3.0.1): New Features/Changes appeared first on Oracle Trainings.


Overview of Amazon Web Services & Concepts


This post covers Amazon AWS Overview, Concepts, Architecture & 5 reasons why one should start learning Amazon Cloud

This post is for beginners as well as experienced professionals such as DBAs, developers, system admins etc. who are just starting to learn Amazon Cloud, and it covers some of the very basic questions like What is Cloud Computing, the cloud service models, various services offered by Amazon AWS & much more.

Introduction to Amazon Web Services (AWS)

In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses in the form of web services—now commonly known as cloud computing. One of the key benefits of cloud computing is the opportunity to replace up-front capital infrastructure expenses with low variable costs that scale with your business. With the cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster. Today, AWS provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers hundreds of thousands of businesses in 190 countries around the world.

Before going deep into Amazon AWS, let us understand the basics of Cloud Computing

What Is Cloud Computing?

Cloud computing is the on-demand delivery of computing power, database storage, applications, and other IT resources through a cloud services platform via the Internet with pay-as-you-go pricing. Cloud computing provides a simple way to access servers, storage, databases and a broad set of application services over the Internet. A cloud services platform such as Amazon Web Services owns and maintains the network-connected hardware required for these applications services, while you provision and use what you need via a web application.

Cloud Computing

Cloud Computing Service Models

  • Infrastructure as a Service (IaaS): Infrastructure as a Service (IaaS) contains the basic building blocks for cloud IT and typically provide access to networking features, computers (virtual or on dedicated hardware), and data storage space.
  • Platform as a Service (PaaS): Platform as a Service (PaaS) removes the need for your organization to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications.
  • Software as a Service (SaaS): Software as a Service (SaaS) provides you with a completed product that is run and managed by the service provider. In most cases, people referring to Software as a Service are referring to end-user applications.

Cloud Computing service Models

Cloud Computing Deployment Models

There are four basic cloud deployment models, which are:

  • Private cloud model: In this model, the cloud infrastructure is set up on premises for the exclusive use of an organization and its customers. In terms of cost efficiency, this deployment model doesn’t bring many benefits; however, many large enterprises choose it because of the security it offers.
  • Public cloud model: The public cloud is hosted on the premises of the service provider, who then provides cloud services to all of its customers. This deployment is generally adopted by many small to mid-sized organizations for their non-core and some of their core functions.
  • Community cloud: The community cloud model is a cloud infrastructure shared by a group of organizations from similar industries and backgrounds with similar requirements, i.e. mission, security, compliance, and IT policies. It may exist on or off premises and can be managed by the community of these organizations.
  • Hybrid cloud model: The hybrid cloud is a combination of two or more models (private cloud, public cloud or community cloud). Though these models maintain their separate identities, they are amalgamated through standard technology that enables the portability of data and applications.

Cloud Computing Deployment Model

Amazon Web Services Cloud Platform

AWS consists of many cloud services that you can use in combinations tailored to your business or organizational needs. This section introduces the major AWS services by category. To access the services, you can use the AWS Management Console, the Command Line Interface, or Software Development Kits (SDKs).

  • AWS Management Console: Access and manage Amazon Web Services through the AWS Management Console, a simple and intuitive user interface.
  • AWS Command Line Interface: The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
  • Software Development Kits:  Software Development Kits (SDKs) simplify using AWS services in your applications with an Application Program Interface (API) tailored to your programming language or platform.
  • Compute
    Amazon EC2: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. The Amazon EC2 simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment.

Amazon EC2 Container Service: Amazon EC2 Container Service (ECS) is a highly scalable, high-performance container management service that supports Docker containers. It allows you to easily run applications on a managed cluster of Amazon EC2 instances.

Amazon EC2 Container Registry: Amazon EC2 Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon EC2 Container Service (ECS), simplifying your development-to-production workflow.

Amazon Lightsail: Amazon Lightsail is designed to be the easiest way to launch and manage a virtual private server with AWS.

  • Storage
    Amazon S3: Amazon Simple Storage Service (Amazon S3) is object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web.

Amazon Elastic Block Store: Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud.

Amazon Elastic File System: Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud.

Amazon Glacier: Amazon Glacier is a secure, durable, and extremely low-cost storage service for data archiving and long-term backup.

  • Database
    Amazon Aurora: Amazon Aurora is a MySQL and PostgreSQL compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases.

Amazon RDS: Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud.

Amazon DynamoDB: Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale.

This is just a glimpse of some of the services from Amazon AWS. In the next follow-up post, I’ll cover what you must learn & the roadmap to start with Amazon AWS.
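As a quick taste of the SDKs mentioned above, here is a minimal Python (boto3) sketch that talks to Amazon S3; it assumes AWS credentials are already configured (for example via aws configure), and the bucket and file names are placeholders.

import boto3

s3 = boto3.client("s3")

# List the S3 buckets in the account.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Upload a local file to a bucket (placeholder names).
s3.upload_file("report.csv", "my-example-bucket", "reports/report.csv")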

Amazon Web Services – Basic Architecture

AWS Architecture

Why AWS Certification – Here are Top 5 Reasons

1. Cloud is the Future of Business Technology:

Nowadays cloud computing is the technology that every business wants. Why? Because it’s economical, fast, and more advanced, with better features than conventional technology. There is no need for heavy integration or heavy maintenance; the cloud can give you all these features in a single platform.

Market Stats for Cloud Computing

 

2. AWS Certification Reasonable and Within Reach:

There are tons of cloud computing certifications provided by various vendors, but Amazon’s certifications are among the most affordable. Although it’s not that easy to get AWS certified, with the required basic knowledge you can pass.

3. In-Demand Skills Always Earn More Money!!! Right?

Market Stats

4. AWS Becomes the God of Cloud:

According to a Gartner report, AWS has grown more than 10 times compared to its 14 competitors combined, and those competitors are not small players; they are well-known names in the cloud computing industry, e.g. Microsoft Azure, Google Cloud Platform, IBM SoftLayer, Rackspace, and Joyent. While they are busy competing with each other, as a software industry aspirant didn’t you notice the biggest opportunity? Yes, AWS is expanding and all the big organizations are working on AWS, so you need to grab this opportunity and become a certified AWS professional as soon as possible, because we all know the competition in the software industry.

Cloud Stats

5. New Height to Your Expertise:

If you want to improve your skills or expertise in the cloud computing field, then AWS certification is best for you. AWS certification will boost your resume and your profile, but for that you must know AWS both theoretically and practically, and once you fix your path by completing an initial certification you can go for another or a more advanced certification in this field.

You will get to know all of this and deep-dive into each concept related to Amazon AWS once you enroll in our Amazon AWS Solution Architect training.

Another question that might come to your mind: what exactly do you get when you enroll?

We are glad to tell you that:

Things you will get!!

  1. Live Instructor-led Online Interactive Sessions
  2. FREE unlimited retake for next 3 Years
  3. FREE On-Job Support for next 3 Years
  4. Training Material (Presentation + Videos) with Hands-on Lab Exercises mentioned
  5. Recording of Live Interactive Session for Lifetime Access
  6. 100% Money Back Guarantee (If you attend sessions, practice and don’t get results, We’ll do full REFUND, check our Refund Policy)

Next Task For You

Click on the image below to download your FREE Guide, 5 things you must know about Amazon AWS

The post Overview of Amazon Web Services & Concepts appeared first on Oracle Trainings.

Oracle EBS (R12): Database Cloning from RMAN backup


This post covers Oracle EBS (R12) Database Cloning from RMAN backup, one of the most common tasks that Oracle Apps DBAs do (apart from Installation, Patching & AD Administration).

If you are new to the Oracle Apps DBA role, or already working as an Apps DBA but on version 11i or R12.1, then I suggest you first go through the FREE videos below from Oracle ACE, author, and Oracle Apps expert Atul Kumar.

Oracle EBS (R12) Database Cloning using RMAN

Some situations require the database to be recreated separately, without using Rapid Clone. Typical scenarios are when system downtime is not feasible, or when database tools like RMAN are being used to copy the database in hot backup mode.

The cloning process consists of three phases, each of which is made up of several logical sections and their steps.

1. Prepare Source Database System

a) Log on to the source system as the ORACLE user
b) Execute preclone on Database tier of the source system. (For this example, DEMO is my source system)
$ cd $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>
$ perl adpreclone.pl dbTier

2. Copy the source database to the target system

a) Log on to the source system database node as the ORACLE user, and then:

  1. Perform a normal shutdown of the source system database
  2. Copy the database (.dbf) files from the source system to the target system
  3. Copy the source database ORACLE_HOME to the target system
  4. Start the source Applications system database and application tier processes

3. Configure Target Database System

This section documents the steps needed to allow the manual creation of the target database control files within the Rapid Clone process. This method needs to be used for databases located on raw partitions, or when cloning a hot backup.

a) Log on to the target system as the ORACLE user

b) Configure the [RDBMS ORACLE_HOME]
$ cd [RDBMS ORACLE_HOME]/appsutil/clone/bin
$ perl adcfgclone.pl dbTechStack

c) Create the target database control files manually

In this step, you copy and recreate the database using your preferred method, such as RMAN restore, Flash Copy, Snap View, or Mirror View.

d) Start the target database in open mode

e) Run AutoConfig in the INSTE8_SETUP mode on the database tier as follows:
$ sh <RDBMS_ORACLE_HOME>/appsutil/bin/adconfig.sh contextfile=<CONTEXT_FILE> run=INSTE8_SETUP

f) Run the library update script against the database
$ cd [RDBMS ORACLE_HOME]/appsutil/install/[CONTEXT NAME]
$ sqlplus “/ as sysdba” @adupdlib.sql [libext]

Where [libext] should be set to ‘sl’ for HP-UX, ‘so’ for any other UNIX platform, or ‘dll’ for Windows.

g) Configure the Target database.
The database must be running and open before performing this step.

$ cd <RDBMS ORACLE_HOME>/appsutil/clone/bin
$ perl adcfgclone.pl dbconfig <Database Target Context File>

Where Database target context file is: [RDBMS ORACLE_HOME]/appsutil/[Target CONTEXT_NAME].xml.

Note: The dbconfig option will configure the database with the required settings for the new target, but it will not recreate the control files.

Reference: 

  • Cloning Oracle E-Business Suite Release 12.2 with Rapid Clone (Doc ID 1383621.1)

This post is from our Oracle Apps DBA (R12.2) Training, in which we cover  Architecture & Changes in Oracle E-Business Suite R12.2, Staging & Installation, File System & Important Files in R12.2, Start/Stop, Patching, AD Administration, Cloning, Concurrent Managers, AutoConfig, Password Management and Troubleshooting and much more.

Next task for you

Download your FREE guide by clicking on the image below to learn Cloning in EBS R12 for Apps DBAs.

Oracle EBS (R12): 5 cloning must read docs

Are you having any queries or hitting any issues in R12.2 cloning?

If you like this post then don’t forget to share with your Apps DBA Friends

The post Oracle EBS (R12): Database Cloning from RMAN backup appeared first on Oracle Trainings.

[Video] Oracle WebLogic Administration: Weblogic Domain Topology


In this video, we are going to look at the Oracle WebLogic Domain. For those who are new: a Domain is an interrelated set of WebLogic Server resources managed as a unit. A Domain includes one Administration Server and, optionally, one or more Managed Servers. Various clients use the Administration Server to configure the system, and the Managed Servers are used to run the actual applications. To know more about Domains in detail, go through the video below.

If you are a beginner and want to learn Oracle Weblogic Server Administration then check our blog post here where Atul covers Weblogic Domain Tasks and Tools.

What are the Basic Components of Weblogic Server:

Below are the basic weblogic components

  • Domains
  • Admin Server
  • Managed Server
  • Node Manager
  • Weblogic Server Cluster

What is the Domain in Weblogic server:

  • Domain is a logically related group of Oracle WebLogic Server resources that are managed as a single unit
  • Domain Provides one point of administration
  • Can logically separate: A) Development, test, and production applications B) Organizational divisions
  • Domain will have 1 Admin Server and 0..N Managed Server and 0..N Cluster
  • Resources & Services of Domain are Machine,Network Channel (Port & Protocol),Virtual Hosts,JDBC and JMS
  • Domain Configuration is stored in File Based Repository $DOMAIN_HOME/config/config.xml
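As an illustration, the WLST sketch below (WLST scripts use Python/Jython syntax and are run with wlst.sh from the Oracle home) opens a domain offline and lists the servers defined in its config.xml; the domain path and credentials are examples only.

# Offline: open the domain's file-based repository (config.xml) and inspect it.
readDomain('/u01/domains/base_domain')      # example domain home
cd('/Servers')
ls()                                        # lists the Admin Server and Managed Servers
closeDomain()

# Online alternative: connect to a running Admin Server and browse the same config.
# connect('weblogic', 'welcome1', 't3://adminhost:7001')
# cd('/Servers'); ls(); disconnect()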

Building Blocks of Domain:

  • Domain will have a Domain Home
  • Servers: Admin Server and Managed Server
  • Clusters
  • Machines
  • Domain Directory
  • Each Domain will have its own WebLogic Admin Console

How Domains Can Be Organized:

  • We can create multiple domains from a single WebLogic installation
  • We can also have multiple installations, each with a single domain

So this is the Oracle WebLogic Domain in a nutshell. Please go through the video to know it in detail.

We cover this in one of the modules of our Oracle WebLogic Training, where we also cover Architecture, File System, JDBC, JMS, HA, Clustering, Security, Patching, Upgrade, Backup, and Recovery etc.

Did You Find this Video useful?

Leave a Comment.

Related Posts

  • [Video] Oracle Weblogic Server: Weblogic Admin Tasks & Tools.. Click Here
  • Troubleshooting Oracle Weblogic Server: Startup Issue: OutOfMemoryError PermGen Space.. CLick Here

Join Community

Join 3500+ Oracle Professionals like you to discuss Oracle Weblogic Server, Ask Questions or Help Others in Private Facebook Group for Oracle Weblogic Server .

What’s Next

  • What is Server, Admin and Managed Servers?
  • What is Weblogic Domain Home?

Leave a comment if you know the answers, or if you want to know the answer to these questions (share which one); if I see enough questions, then I’ll cover WebLogic Servers in more detail.

Are you planning to learn WebLogic Server, or would you like to check some of the common Oracle WebLogic interview questions? Then get them from here (sent over email).

The post [Video] Oracle WebLogic Administration: Weblogic Domain Topology appeared first on Oracle Trainings.

Oracle E-Business Suite (R12) On OCI for Apps DBAs


This blog gives a 100-foot overview of Oracle applications on OCI with Lift and Shift. All the points, right from consideration to implementation of Oracle application instances, will be covered.

Note: OCI is part of the IaaS service model (the other 2 cloud service models are SaaS & PaaS), and OCI is a re-branding of the Bare Metal Cloud Service (BMCS).

EBS on OCI – Why and How?

Running EBS on OCI means running it on the 2nd generation of the IaaS service offered by Oracle. OCI is one of the best enterprise-grade infrastructure offerings and has many benefits over other cloud IaaS providers. The biggest benefit I can think of is running Oracle on Oracle, so complete support, compliance and a low-latency network are promised without question, apart from high availability and maximum performance and throughput for Oracle applications.

The benefits of OCI can be leveraged by considering the (non-exhaustive) list below:

  • OCI is having multiple Compute options – VM and Bare Metal
  • Quick Provisioning of instances
  • Storage – Object and Block Storage with High and Dense IOPS.
  • Low Network Latency
  • Highly Secure Network (Public/Private Subnets)
  • Pay-as-you-go model
  • Compliance with SOX, HIPAA etc.
  • Automated Backup and Recovery
  • RAC Support
  • High Availability and DR
  • Scale-out of all tiers

Now, we have multiple options for running EBS on OCI. We can go with a fresh installation of EBS on OCI and use it for production or demo purposes, or we can migrate our on-premises EBS to OCI, i.e. Lift and Shift. So it is a question of re-implementation versus migration, and both work well depending on requirements and suitability.

Usually, for re-implementation, a SaaS offering is available from Oracle in the form of Fusion Applications product-specific instances, which can be procured as public or private cloud from Oracle on a need basis.

Things Good to Know

  • Before moving Oracle applications to OCI, consider the current architecture of the Oracle application and the options and offerings available from Oracle. You may find a much better option within budget.
  • Select the most suitable Oracle offering to go with.
  • Collect the peak load timings and performance figures of the current on-premises applications and databases. Based on that, you can select the server resources; OCI has a wide variety of shapes available.

Check the deployment options available for Oracle EBS; based on the requirement, you can select among them. Below is a high-level overview of the options.

Multiple options are available for the database tier, giving better and wider choices:

  1. High-Performance Database tier
  2. High Availability – RAC and Data Guard.
  3. Oracle Exadata
  • Manual and automated Lift and Shift options are available; the automated option using the EBS Cloud Admin Tool is generally preferred
  • Automated Lift and Shift works in 2 steps:
  • Backup the instance using Oracle Cloud Backup utility
  • Create a new instance using Backup using EBS Cloud Admin Tool.

A quick 12-point list for migrating EBS to OCI is here:

  1. Prepare OCI tenancy environment, either manually or automatically define VCN.
  2. Prepare Compartment, network resources and create associate keys required.
  3. Prepare OCI tenancy with DBCS/Exadata CS.
  4. Create Subnets, Routing tables etc for Application and Database.
  5. Install and configure EBS Cloud Admin Tool.
  6. Always take the latest one.
  7. Configure the Backup Utility to take on-premise backup to Cloud Storage.
  8. Prepare the Configuration file which will automate the process.
  9. Prepare the Stage Area to take Backup of database and applications.
  10. Backup Oracle application Environment to cloud.
  11. Create the EBS environment from the backup taken above, then perform post Lift and Shift activities (e.g. resetting passwords).
  12. Access the application and perform all basic checks

EBS Complete Architecture on OCI will look like:

Finally, to run EBS on OCI we need knowledge of OCI (for basic preparation, instance deployment and provisioning) as well as knowledge of EBS from a cloud perspective (for migration and maintenance of Oracle ERP). Please have a look at our:

Related/Further Reading

If you are new to Oracle Cloud Infrastructure (OCI), then I would suggest you look at our previous blog by Atul on Oracle Cloud Infrastructure (OCI): Region, AD, Tenancy, Compartment, VCN, IAM, Storage Service.

Another IaaS offering from Oracle is OCI-Classic (or OCI-C); to find the difference between the two and when to use which, check my previous post OCI vs OCI-C here.

If you are just starting out in Cloud, then I highly recommend you go through these first

Did you start your Oracle Cloud journey? Get a FREE copy in your inbox with steps to register for Oracle Cloud and get 300 USD FREE credit to practice.

The post Oracle E-Business Suite (R12) On OCI for Apps DBAs appeared first on Oracle Trainings.

[Video]: Oracle Access Manager (OAM) Troubleshooting: OAM Console Login Issue


One of our trainees reported that since they started integrating Oracle Access Manager with Oracle Identity Manager, the oamconsole was not working anymore.

So if you are in a situation like that, how would you troubleshoot it, and how would you log in to the Oracle Access Manager oamconsole? In this tech-tip video, we cover troubleshooting of the Oracle Access Manager (OAM) oamconsole login issue.

As shown in the video, the user was unable to log in to oamconsole (http://<hostname>:<OAM_Port>/oamconsole); it was redirecting to http://<hostname>:7777/oam/server/….

First of all, you need to understand why it is redirecting to port 7777. Typically port 7777 is used by Oracle HTTP Server, so the request is being redirected to the web server, and since this is OAM, it redirects to the OAM login page. Now the question is why it is redirecting to the OAM login page; the answer can be found in the WebLogic console for OAM, as shown in the video in detail.

In the WebLogic Console for OAM, under Security Realms -> myrealm -> Providers, you will see the IAMSuiteAgent provider that gets added to the WebLogic domain where Oracle Access Manager is deployed. IAMSuiteAgent is a Java WebGate, i.e. the Policy Enforcement Point (PEP), sitting on top of WebLogic Server with the OAM server to protect applications and redirect users to the Single Sign-On (SSO) page whenever someone tries to access an application there. When we integrate OAM with OIM, we remove this IAMSuiteAgent and use a WebGate instead. So remove the IAMSuiteAgent provider and restart the WebLogic domain, then try to access the Oracle Access Manager oamconsole link; you should now be able to access it.

Just go through this video to learn how the oamconsole is protected and why the user was redirected to port 7777 at the start. We saw that IAMSuiteAgent was the main culprit, but where exactly is this port 7777 configured? To find that out, go through this useful tech-tip video and stay tuned for our next video about the Proxy Port.

 

We cover this troubleshooting in our Oracle Access Manager Training. We cover a lot of other topics like OAM, FMW and WebLogic concepts, OID, OHS, OAM integration with other oracle products, Cloning, HA, DR and much more. Please check our Step by Step Activity Guide You Must Perform to become Expert in IDM to see what all things we cover in this training.

Did You Find this Video useful?

or

What more topics you want to see as part of 5 Minute Tech Tips on Oracle Access Manager?

Leave a Comment.

Related Posts

 

Next Task For You

If you are looking for commonly asked interview questions for Oracle Access Manager then just click here and get that in your inbox.

 

If you are looking to kickstart your journey to Oracle Identity & Access Management, on-premise or cloud, then just click below and get it in your inbox.

 

The post [Video]: Oracle Access Manager (OAM) Troubleshooting: OAM Console Login Issue appeared first on Oracle Trainings.
