This page was exported from IT Certification Exam Braindumps [ http://blog.braindumpsit.com ]
Export date: Sun Oct 6 18:20:15 2024 / +0000 GMT

Title: Oracle 1Z0-1084-21 Questions and Answers Guarantee You Pass the Test Easily [Q30-Q54]

Share Latest 1Z0-1084-21 Dump with 75 Questions and Answers

NEW QUESTION 30
A pod security policy (PSP) is implemented in your Oracle Cloud Infrastructure Container Engine for Kubernetes cluster. Which rule can you use to prevent a container from running as root using PSP?
  NoPrivilege
  RunOnlyAsUser
  MustRunAsNonRoot
  forbiddenRoot
Explanation
What is a Pod Security Policy?
A Pod Security Policy is a cluster-level resource that controls security-sensitive aspects of the pod specification. PodSecurityPolicy objects define a set of conditions that a pod must run with in order to be accepted into the system, as well as defaults for the related fields. They allow an administrator to control the following:
Privilege Escalation
These options control the allowPrivilegeEscalation container option. This bool directly controls whether the no_new_privs flag gets set on the container process. This flag will prevent setuid binaries from changing the effective user ID, and prevent files from enabling extra capabilities (e.g. it will prevent the use of the ping tool).
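To make this concrete, here is a minimal PodSecurityPolicy sketch that rejects containers running as root. The policy name and the fields other than the non-root rule are illustrative assumptions, and PodSecurityPolicy was removed in Kubernetes 1.25, so this assumes an older cluster where policy/v1beta1 is still served:

```shell
# Apply a minimal PodSecurityPolicy that rejects pods whose containers
# would run as root (policy name and most field values are illustrative).
kubectl apply -f - <<'EOF'
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-nonroot         # illustrative name
spec:
  privileged: false
  allowPrivilegeEscalation: false  # sets no_new_privs on the container process
  runAsUser:
    rule: 'MustRunAsNonRoot'       # the rule this question asks about
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
EOF
```

A pod that sets `runAsUser: 0` (or whose image defaults to root) would be rejected at admission time under this policy.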
This behavior is required to effectively enforce MustRunAsNonRoot. Example:
  # Require the container to run without root privileges.
  rule: 'MustRunAsNonRoot'

NEW QUESTION 31
In the sample Kubernetes manifest file below, what annotations should you add to create a private load balancer in Oracle Cloud Infrastructure Container Engine for Kubernetes?
A)
B)
C)
D)
  Option A
  Option B
  Option C
  Option D
Explanation
https://docs.cloud.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengcreatingloadbalancer.htm
Creating Internal Load Balancers in Public and Private Subnets
You can create Oracle Cloud Infrastructure load balancers to control access to services running on a cluster:
When you create a 'custom' cluster, you select an existing VCN that contains the network resources to be used by the new cluster. If you want to use load balancers to control traffic into the VCN, you select existing public or private subnets in that VCN to host the load balancers.
When you create a 'quick cluster', the VCN that's automatically created contains a public regional subnet to host a load balancer. If you want to host load balancers in private subnets, you can add private subnets to the VCN later.
Alternatively, you can create an internal load balancer service in a cluster to enable other programs running in the same VCN as the cluster to access services in the cluster.
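To make the private load balancer case concrete, a complete Service manifest might look like the sketch below. The service name, selector, ports, and the subnet OCID are illustrative placeholders, not values from the exam text:

```shell
# Create a Kubernetes Service that provisions an internal (private) OCI load
# balancer; both annotations go in the metadata section of the manifest.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-internal-lb            # illustrative name
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-internal: "true"
    # OCID of the private subnet to host the load balancer (placeholder value)
    service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..aaaaaa....vdfw"
spec:
  type: LoadBalancer
  selector:
    app: my-app                   # illustrative selector
  ports:
  - port: 80
    targetPort: 8080
EOF
```

For an internal load balancer on a public subnet, only the first annotation is needed, as the explanation below describes.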
You can host internal load balancers in public subnets and private subnets.
To create an internal load balancer hosted on a public subnet, add the following annotation in the metadata section of the manifest file:
service.beta.kubernetes.io/oci-load-balancer-internal: "true"
To create an internal load balancer hosted on a private subnet, add both of the following annotations in the metadata section of the manifest file:
service.beta.kubernetes.io/oci-load-balancer-internal: "true"
service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..aaaaaa….vdfw"
where ocid1.subnet.oc1..aaaaaa….vdfw is the OCID of the private subnet.

NEW QUESTION 32
What is the difference between blue/green and canary deployment strategies?
  In blue/green, application is deployed in minor increments to a select group of people. In canary, both old and new applications are simultaneously in production.
  In blue/green, both old and new applications are in production at the same time. In canary, application is deployed incrementally to a select group of people.
  In blue/green, current applications are slowly replaced with new ones. In canary, application is deployed incrementally to a select group of people.
  In blue/green, current applications are slowly replaced with new ones. In canary, both old and new applications are in production at the same time.
Explanation
Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green. At any time, only one of the environments is live, with the live environment serving all production traffic. For this example, Blue is currently live and Green is idle.
https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html
Canary deployments are a pattern for rolling out releases to a subset of users or servers. The idea is to first deploy the change to a small subset of servers, test it, and then roll the change out to the rest of the servers.
…Canaries were once regularly used in coal mining as an early warning system.
https://octopus.com/docs/deployment-patterns/canary-deployments

NEW QUESTION 33
Which two are required to enable Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes (OKE) cluster access from the kubectl CLI?
  An SSH key pair with the public key added to cluster worker nodes
  Install and configure the OCI CLI
  OCI Identity and Access Management Auth Token
  Tiller enabled on the OKE cluster
  A configured OCI API signing key pair
Explanation
Setting Up Local Access to Clusters
To set up a kubeconfig file to enable access to a cluster using a local installation of kubectl and the Kubernetes Dashboard:
Step 1: Generate an API signing key pair
Step 2: Upload the public key of the API signing key pair
Step 3: Install and configure the Oracle Cloud Infrastructure CLI
Step 4: Set up the kubeconfig file
Step 5: Verify that kubectl can access the cluster
References:
https://docs.cloud.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengdownloadkubeconfigfile.htm

NEW QUESTION 34
Who is responsible for patching, upgrading and maintaining the worker nodes in Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)?
  It is automated
  Independent Software Vendors
  Oracle Support
  The user
Explanation
After a new version of Kubernetes has been released and when Container Engine for Kubernetes supports the new version, you can use Container Engine for Kubernetes to upgrade master nodes running older versions of Kubernetes. Because Container Engine for Kubernetes distributes the Kubernetes Control Plane on multiple Oracle-managed master nodes (distributed across different availability domains in a region where supported) to ensure high availability, you're able to upgrade the Kubernetes version running on master nodes with zero downtime. Having upgraded master nodes to a new version of Kubernetes, you can subsequently create new node pools running the newer version.
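Creating such a newer-version node pool can be sketched with the OCI CLI as below. Every OCID, name, version, and shape is a placeholder, and placement/subnet options are omitted; check `oci ce node-pool create --help` for the full set of required parameters:

```shell
# Create a new node pool on the upgraded cluster, pinned to a newer
# Kubernetes version (every value below is a placeholder).
oci ce node-pool create \
  --cluster-id ocid1.cluster.oc1..exampleuniqueID \
  --compartment-id ocid1.compartment.oc1..exampleuniqueID \
  --name pool-upgraded \
  --kubernetes-version v1.21.5 \
  --node-shape VM.Standard2.1
```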
Alternatively, you can continue to create new node pools that will run older versions of Kubernetes (providing those older versions are compatible with the Kubernetes version running on the master nodes).
Note that you upgrade master nodes by performing an 'in-place' upgrade, but you upgrade worker nodes by performing an 'out-of-place' upgrade. To upgrade the version of Kubernetes running on worker nodes in a node pool, you replace the original node pool with a new node pool that has new worker nodes running the appropriate Kubernetes version. Having 'drained' existing worker nodes in the original node pool to prevent new pods starting and to delete existing pods, you can then delete the original node pool.
Upgrading the Kubernetes Version on Worker Nodes in a Cluster:
You can upgrade the version of Kubernetes running on the worker nodes in a cluster in two ways:
(A) Perform an 'in-place' upgrade of a node pool in the cluster, by specifying a more recent Kubernetes version for new worker nodes starting in the existing node pool. First, you modify the existing node pool's properties to specify the more recent Kubernetes version. Then, you 'drain' existing worker nodes in the node pool to prevent new pods starting, and to delete existing pods. Finally, you terminate each of the worker nodes in turn. When new worker nodes are started in the existing node pool, they run the more recent Kubernetes version you specified.
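The 'drain' step described above can be sketched with kubectl. The node name is illustrative, and on kubectl versions older than 1.20 the eviction flag was spelled `--delete-local-data` rather than `--delete-emptydir-data`:

```shell
# Stop new pods scheduling onto the node and evict its existing pods
# before the node is terminated (node name is illustrative).
kubectl cordon 10.0.10.2
kubectl drain 10.0.10.2 --ignore-daemonsets --delete-emptydir-data
# Repeat for each worker node in the pool, then terminate the old nodes
# (or delete the whole node pool in the out-of-place case).
```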
See Performing an In-Place Worker Node Upgrade by Updating an Existing Node Pool.
(B) Perform an 'out-of-place' upgrade of a node pool in the cluster, by replacing the original node pool with a new node pool. First, you create a new node pool with a more recent Kubernetes version. Then, you 'drain' existing worker nodes in the original node pool to prevent new pods starting, and to delete existing pods. Finally, you delete the original node pool. When new worker nodes are started in the new node pool, they run the more recent Kubernetes version you specified. See Performing an Out-of-Place Worker Node Upgrade by Replacing an Existing Node Pool with a New Node Pool.
Note that in both cases:
The more recent Kubernetes version you specify for the worker nodes in the node pool must be compatible with the Kubernetes version running on the master nodes in the cluster. See Upgrading Clusters to Newer Kubernetes Versions.
You must drain existing worker nodes in the original node pool. If you don't drain the worker nodes, workloads running on the cluster are subject to disruption.
References:
https://docs.cloud.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengupgradingk8sworkernode.htm

NEW QUESTION 35
You are tasked with developing an application that requires the use of Oracle Cloud Infrastructure (OCI) APIs to POST messages to a stream in the OCI Streaming service.
Which statement is incorrect?
  The request must include an authorization signing string including (but not limited to) x-content-sha256, content-type, and content-length headers.
  The Content-Type header must be set to application/json
  An HTTP 401 will be returned if the client's clock is skewed more than 5 minutes from the server's.
  The request does not require an Authorization header.
Explanation
Authorization Header
The Oracle Cloud Infrastructure signature uses the "Signature" Authentication scheme (with an Authorization header), and not the Signature HTTP header.
Required Credentials and OCIDs
You need an API signing key in the correct format. See Required Keys and OCIDs. You also need the OCIDs for your tenancy and user. See Where to Get the Tenancy's OCID and User's OCID.
Summary of Signing Steps
In general, these are the steps required to sign a request:
Form the HTTPS request (SSL protocol TLS 1.2 is required).
Create the signing string, which is based on parts of the request.
Create the signature from the signing string, using your private key and the RSA-SHA256 algorithm.
Add the resulting signature and other required information to the Authorization header in the request.
References:
https://docs.cloud.oracle.com/en-us/iaas/Content/Streaming/Concepts/streamingoverview.htm
https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/signingrequests.htm

NEW QUESTION 36
You are working on a cloud native e-commerce application on Oracle Cloud Infrastructure (OCI). Your application architecture has multiple OCI services, including Oracle Functions. You need to trigger these functions directly from other OCI services, without having to run custom code.
Which OCI service cannot trigger your functions directly?
  OCI Events Service
  OCI Registry
  OCI API Gateway
  Oracle Integration
Explanation
Overview of Functions:
Oracle Functions is a fully managed, multi-tenant, highly scalable, on-demand, Functions-as-a-Service platform. It is built on enterprise-grade Oracle Cloud Infrastructure and powered by the Fn Project open source engine. Use Oracle Functions (sometimes abbreviated to just Functions) when you want to focus on writing code to meet business needs. The serverless and elastic architecture of Oracle Functions means there's no infrastructure administration or software administration for you to perform.
You don't provision or maintain compute instances, and operating system software patches and upgrades are applied automatically. Oracle Functions simply ensures your app is highly available, scalable, secure, and monitored. With Oracle Functions, you can write code in Java, Python, Node, Go, and Ruby (and for advanced use cases, bring your own Dockerfile and GraalVM).
You can invoke a function that you've deployed to Oracle Functions from:
– The Fn Project CLI.
– The Oracle Cloud Infrastructure SDKs.
– Signed HTTP requests to the function's invoke endpoint. Every function has an invoke endpoint.
– Other Oracle Cloud services (for example, triggered by an event in the Events service) or from external services.
So you can deploy your code, call it directly or trigger it in response to events, and get billed only for the resources consumed during the execution.
Invoking Oracle Functions from Other Oracle Cloud Infrastructure Services:
You can invoke functions in Oracle Functions from other Oracle Cloud Infrastructure services. Typically, you'll want an event in another service to trigger a request to invoke a function defined in Oracle Functions. This functionality is currently available in:
A: The Events service. For more information, see Overview of Events.
B: The Notifications service. For more information, see Notifications Overview. For a scenario, see Scenario A: Automatically Resize VMs.
C: The API Gateway service. For more information, see Adding a Function in Oracle Functions as an API Gateway Back End.
D: The Oracle Integration service, using the OCI Signature Version 1 security policy.
For more information, see Configure Oracle Integration to Call Oracle Cloud Infrastructure Functions with the REST Adapter in Using the REST Adapter with Oracle Integration.
So the OCI Registry service cannot trigger your functions directly.
References:
https://docs.cloud.oracle.com/en-us/iaas/Content/Functions/Tasks/functionsintegratingwithother.htm
https://docs.cloud.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm
https://blogs.oracle.com/cloud-infrastructure/announcing-notifications-triggers-for-serverless-functions

NEW QUESTION 37
What is the open source engine for Oracle Functions?
  Apache OpenWhisk
  OpenFaaS
  Fn Project
  Knative
Explanation
https://www.oracle.com/webfolder/technetwork/tutorials/FAQs/oci/Functions-FAQ.pdf
Oracle Functions is a fully managed, multi-tenant, highly scalable, on-demand, Functions-as-a-Service platform. It is built on enterprise-grade Oracle Cloud Infrastructure and powered by the Fn Project open source engine. Use Oracle Functions (sometimes abbreviated to just Functions) when you want to focus on writing code to meet business needs.

NEW QUESTION 38
You are building a cloud native, serverless travel application with multiple Oracle Functions in Java, Python and Node.js. You need to build and deploy these functions to a single application named travel-app.
Which command will help you complete this task successfully?
  oci fn function deploy --app travel-app --all
  fn -v deploy --app travel-app --all
  oci fn application --application-name travel-app deploy --all
  fn function deploy --all --application-name travel-app
Explanation
To get started with Oracle Functions: Creating, Deploying, and Invoking a Helloworld Function
Step 6: Change directory to the newly created helloworld-func directory.
Step 7: Enter the following single Fn Project command to build the function and its dependencies as a Docker image called helloworld-func, push the image to the specified Docker registry, and deploy the function to Oracle Functions in the helloworld-app:
$ fn -v deploy --app helloworld-app
The -v option simply shows more detail about what Fn Project commands are doing (see Using the Fn Project CLI with Oracle Functions).
References:
https://docs.cloud.oracle.com/en-us/iaas/Content/Functions/Tasks/functionscreatingfirst.htm

NEW QUESTION 39
You are deploying an API via Oracle Cloud Infrastructure (OCI) API Gateway and you want to implement request policies to control access. Which is NOT available in OCI API Gateway?
  Limiting the number of requests sent to backend services
  Enabling CORS (Cross-Origin Resource Sharing) support
  Providing authentication and authorization
  Controlling access to OCI resources
Explanation
Adding Request Policies and Response Policies to API Deployment Specifications:
You can control the behavior of an API deployment you create on an API gateway by adding request and response policies to the API deployment specification:
A request policy describes actions to be performed on an incoming request from a caller before it is sent to a back end.
A response policy describes actions to be performed on a response returned from a back end before it is sent to a caller.
You can use request policies to:
limit the number of requests sent to back-end services
enable CORS (Cross-Origin Resource Sharing) support
provide authentication and authorization
You can add request and response policies that apply globally to all routes in an API deployment specification, and also (in some cases) request and response policies that apply only to particular routes.
Note the following:
No response policies are currently available.
API Gateway request policies and response policies are different to IAM policies, which control access to Oracle Cloud Infrastructure resources.
You can add request and response policies to an API deployment specification by:
using the Console
editing a JSON file
References:
https://docs.cloud.oracle.com/en-us/iaas/Content/APIGateway/Tasks/apigatewayaddingrequestpolicies.htm

NEW QUESTION 40
You are developing a serverless application with Oracle Functions. Your function needs to store state in a database. Your corporate security standards mandate encryption of secret information like database passwords.
As a function developer, which approach should you follow to satisfy this security requirement?
  Use the Oracle Cloud Infrastructure Console and enter the password in the function configuration section in the provided input field.
  Use Oracle Cloud Infrastructure Key Management to auto-encrypt the password. It will inject the auto-decrypted password inside your function container.
  Encrypt the password using Oracle Cloud Infrastructure Key Management. Decrypt this password in your function code with the generated key.
  All function configuration variables are automatically encrypted by Oracle Functions.
Explanation
Oracle Functions: Using Key Management to Encrypt and Decrypt Configuration Variables
Since this process involves multiple steps, here is an outline of the steps to take:
Create a KMS vault.
Create a Master Encryption Key.
Generate a Data Encryption Key (DEK).
Use the DEK plaintext return value to encrypt the sensitive value (offline).
Store the encrypted sensitive value as a config variable in the serverless application.
Store the DEK ciphertext and the initVector used to encrypt the sensitive value as function config variables.
Within the function, decrypt the DEK ciphertext back into plaintext using the OCID and Cryptographic Endpoint by invoking the OCI KMS SDK.
Decrypt the sensitive value using the decrypted DEK plaintext and the initVector.
References:
https://blogs.oracle.com/developers/oracle-functions-using-key-management-to-encrypt-and-decrypt-configuratio
https://docs.oracle.com/en/database/other-databases/essbase/19.3/essad/encrypt-values-using-kms.html

NEW QUESTION 41
Which two are benefits of distributed systems?
  Privacy
  Security
  Ease of testing
  Scalability
  Resiliency
Explanation
Distributed systems of cloud-native components, such as functions, have benefits like resiliency and availability. Resiliency and availability refer to the ability of a system to continue operating despite the failure or sub-optimal performance of some of its components.
In the case of Oracle Functions:
The control plane is a set of components that manages function definitions.
The data plane is a set of components that executes functions in response to invocation requests.
For resiliency and high availability, both the control plane and data plane components are distributed across different availability domains and fault domains in a region. If one of the domains ceases to be available, the components in the remaining domains take over to ensure that function definition management and execution are not disrupted.
When functions are invoked, they run in the subnets specified for the application to which the functions belong. For resiliency and high availability, best practice is to specify a regional subnet for an application (or alternatively, multiple AD-specific subnets in different availability domains). If an availability domain specified for an application ceases to be available, Oracle Functions runs functions in an alternative availability domain.
Concurrency and Scalability
Concurrency refers to the ability of a system to run multiple operations in parallel using shared resources. Scalability refers to the ability of the system to scale capacity (both up and down) to meet demand.
In the case of Functions, when a function is invoked for the first time, the function's image is run as a container on an instance in a subnet associated with the application to which the function belongs.
When the function is executing inside the container, the function can read from and write to other shared resources and services running in the same subnet (for example, Database as a Service). The function can also read from and write to other shared resources (for example, Object Storage), and other Oracle Cloud Services.
If Oracle Functions receives multiple calls to a function that is currently executing inside a running container, Oracle Functions automatically and seamlessly scales horizontally to serve all the incoming requests. Oracle Functions starts multiple Docker containers, up to the limit specified for your tenancy. The default limit is 30 GB of RAM reserved for function execution per availability domain, although you can request an increase to this limit. Provided the limit is not exceeded, there is no difference in response time (latency) between functions executing on the different containers.

NEW QUESTION 42
Which statement accurately describes Oracle Cloud Infrastructure (OCI) Load Balancer integration with OCI Container Engine for Kubernetes (OKE)?
  OKE service provisions an OCI Load Balancer instance for each Kubernetes service with LoadBalancer type in the YAML configuration.
  OCI Load Balancer instance provisioning is triggered by OCI Events service for each Kubernetes service with LoadBalancer type in the YAML configuration.
  OCI Load Balancer instance must be manually provisioned for each Kubernetes service that requires traffic balancing.
  OKE service provisions a single OCI Load Balancer instance shared with all the Kubernetes services with LoadBalancer type in the YAML configuration.
Explanation
If you are running your Kubernetes cluster on Oracle Container Engine for Kubernetes (commonly known as OKE), you can have OCI automatically provision load balancers for you by creating a Service of type LoadBalancer instead of (or in addition to) installing an ingress controller like Traefik or Voyager. When you apply such a YAML file to your cluster, you will see the new service is created. After a short time (typically less than a minute) the OCI Load Balancer will be provisioned.
https://oracle.github.io/weblogic-kubernetes-operator/faq/oci-lb/

NEW QUESTION 43
A developer using Oracle Cloud Infrastructure (OCI) API Gateway must authenticate the API requests to their web application. The authentication process must be implemented using a custom scheme which accepts string parameters from the API caller. Which method can the developer use in this scenario?
  Create an authorizer function using request header authorization.
  Create an authorizer function using token-based authorization.
  Create a cross account functions authorizer.
  Create an authorizer function using OCI Identity and Access Management based authentication.
Explanation
Using Authorizer Functions to Add Authentication and Authorization to API Deployments:
You can control access to APIs you deploy to API gateways using an 'authorizer function' (as described in this topic), or using JWTs (as described in Using JSON Web Tokens (JWTs) to Add Authentication and Authorization to API Deployments).
You can add authentication and authorization functionality to API gateways by writing an 'authorizer function' that:
1. Processes request attributes to verify the identity of a caller with an identity provider.
2. Determines the operations that the caller is allowed to perform.
3. Returns the operations the caller is allowed to perform as a list of 'access scopes' (an 'access scope' is an arbitrary string used to determine access). Optionally returns a key-value pair for use by the API deployment.
For example, as a context variable for use in an HTTP back end definition (see Adding Context Variables to Policies and HTTP Back End Definitions).
Create an authorizer function using request header authorization, implemented using a custom scheme which accepts string parameters from the API caller.
Managing Input Parameters
In our case we will need to manage quite a few static parameters in our code, for example the URLs of the secrets service endpoints, the username and other constant parameterised data. We can manage these either at Application or Function level (an OCI Function is packaged in an Application which can contain multiple Functions). In this case I will create function-level parameters. You can use the following command to create the parameters:
fn config function test idcs-assert idcsClientId aedc15531bc8xxxxxxxxxxbd8a193
References:
https://technology.amis.nl/2020/01/03/oracle-cloud-api-gateway-using-an-authorizer-function-for-client-secret-a
https://docs.cloud.oracle.com/en-us/iaas/Content/APIGateway/Tasks/apigatewayusingauthorizerfunction.htm
https://www.ateam-oracle.com/how-to-implement-an-oci-api-gateway-authorization-fn-in-nodejs-that-accesses-o

NEW QUESTION 44
Your Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) administrator has created an OKE cluster with one node pool in a public subnet. You have been asked to provide a log file from one of the nodes for troubleshooting purposes.
Which step should you take to obtain the log file?
  ssh into the node using the public key.
  ssh into the nodes using the private key.
  It is impossible since OKE is a managed Kubernetes service.
  Use the username opc and password to login.
Explanation
A Kubernetes cluster is a group of nodes. The nodes are the machines running applications. Each node can be a physical machine or a virtual machine. The node's capacity (its number of CPUs and amount of memory) is defined when the node is created.
A cluster comprises:
– one or more master nodes (for high availability, typically there will be a number of master nodes)
– one or more worker nodes (sometimes known as minions)
Connecting to Worker Nodes Using SSH
If you provided a public SSH key when creating the node pool in a cluster, the public key is installed on all worker nodes in the cluster. On UNIX and UNIX-like platforms (including Solaris and Linux), you can then connect through SSH to the worker nodes using the ssh utility (an SSH client) to perform administrative tasks.
Note the following instructions assume the UNIX machine you use to connect to the worker node:
Has the ssh utility installed.
Has access to the SSH private key file paired with the SSH public key that was specified when the cluster was created.
How to connect to worker nodes using SSH depends on whether you specified public or private subnets for the worker nodes when defining the node pools in the cluster.
Connecting to Worker Nodes in Public Subnets Using SSH
Before you can connect to a worker node in a public subnet using SSH, you must define an ingress rule in the subnet's security list to allow SSH access. The ingress rule must allow access to port 22 on worker nodes from source 0.0.0.0/0 and any source port.
To connect to a worker node in a public subnet through SSH from a UNIX machine using the ssh utility:
1. Find out the IP address of the worker node to which you want to connect. You can do this in a number of ways:
Using kubectl. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up Cluster Access. Then in a terminal window, enter kubectl get nodes to see the public IP addresses of worker nodes in node pools in the cluster.
Using the Console.
In the Console, display the Cluster List page and then select the cluster to which the worker node belongs. On the Node Pools tab, click the name of the node pool to which the worker node belongs. On the Nodes tab, you see the public IP address of every worker node in the node pool.
Using the REST API. Use the ListNodePools operation to see the public IP addresses of worker nodes in a node pool.
2. In the terminal window, enter ssh opc@<node_ip_address> to connect to the worker node, where <node_ip_address> is the IP address of the worker node that you made a note of earlier. For example, you might enter ssh opc@192.0.2.254.
Note that if the SSH private key is not stored in the file or in the path that the ssh utility expects (for example, the ssh utility might expect the private key to be stored in ~/.ssh/id_rsa), you must explicitly specify the private key filename and location in one of two ways:
Use the -i option to specify the filename and location of the private key. For example, ssh -i ~/.ssh/my_keys/my_host_key_filename opc@192.0.2.254
Add the private key filename and location to an SSH configuration file, either the client configuration file (~/.ssh/config) if it exists, or the system-wide client configuration file (/etc/ssh/ssh_config). For example, you might add the following:
Host 192.0.2.254 IdentityFile ~/.ssh/my_keys/my_host_key_filename
For more about the ssh utility's configuration file, enter man ssh_config.
Note also that permissions on the private key file must allow you read/write/execute access, but prevent other users from accessing the file. For example, to set appropriate permissions, you might enter chmod 600 ~/.ssh/my_keys/my_host_key_filename.
If permissions are not set correctly and the private key file is accessible to other users, the ssh utility will simply ignore the private key file.
References:
https://docs.cloud.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengconnectingworkernodesusingssh.htm

NEW QUESTION 45
Which is NOT a supported SDK on Oracle Cloud Infrastructure (OCI)?
  Go SDK
  Java SDK
  .NET SDK
  Ruby SDK
  Python SDK
Explanation
https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/sdks.htm
* Software Development Kits (SDKs)
Build and deploy apps that integrate with Oracle Cloud Infrastructure services. Each SDK provides the tools you need to develop an app, including code samples and documentation to create, test, and troubleshoot. In addition, if you want to contribute to the development of the SDKs, they are all open source and available on GitHub.
* SDK for Java
* Python SDK
* Ruby SDK
* Go SDK
https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm

NEW QUESTION 46
Which statement is incorrect with regards to the Oracle Cloud Infrastructure (OCI) Notifications service?
  Notification topics may be assigned as the action performed by an OCI Events configuration.
  OCI Alarms can be configured to publish to a notification topic when triggered.
  An OCI function may subscribe to a notification topic.
  A subscription can forward notifications to an HTTPS endpoint.
  A subscription can integrate with PagerDuty events.
  It may be used to receive an email each time an OCI Autonomous Database backup is completed.
Explanation
The Notifications service supports these subscription protocols: Email, Function, HTTPS, PagerDuty and Slack.
Alarms: Notifications sends alarm messages when alarms are breached. The alarm message is sent to the topic specified in the alarm.
For example, an alarm message might be configured for high CPU usage. See Managing Alarms.
References: https://docs.cloud.oracle.com/en-us/iaas/Content/Notification/Concepts/notificationoverview.htm
NEW QUESTION 47
You have deployed a Python application on Oracle Cloud Infrastructure Container Engine for Kubernetes. However, during testing you found a bug that you rectified and created a new Docker image. You need to make sure that if this new image doesn't work, you can roll back to the previous version. Using kubectl, which deployment strategy should you choose?
* Rolling Update
* Canary Deployment
* Blue/Green Deployment
* A/B Testing
Explanation
Using blue-green deployment to reduce downtime and risk:
Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green. At any time, only one of the environments is live, with the live environment serving all production traffic. For this example, Blue is currently live and Green is idle. This technique can eliminate downtime due to app deployment. In addition, blue-green deployment reduces risk: if something unexpected happens with your new version on Green, you can immediately roll back to the last version by switching back to Blue.
Canary deployments are a pattern for rolling out releases to a subset of users or servers.
The idea is to first deploy the change to a small subset of servers, test it, and then roll the change out to the rest of the servers. The canary deployment serves as an early warning indicator with less impact on downtime: if the canary deployment fails, the rest of the servers aren't impacted.
A/B testing is a way to compare two versions of a single variable, typically by testing a subject's response to variant A against variant B, and determining which of the two variants is more effective.
Rolling update offers a way to deploy the new version of your application gradually across your cluster.
References: https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html
NEW QUESTION 48
With the volume of communication that can happen between different components in cloud-native applications, it is vital to test not only functionality but also service resiliency. Which statement is true with regard to service resiliency?
* Resiliency is about recovering from failures without downtime or data loss.
* A goal of resiliency is not to bring a service to a functioning state after a failure.
* Resiliency testing can only be done in a test environment.
* Resiliency is about avoiding failures.
Explanation
Implement resilient applications:
Resiliency is the ability to recover from failures and continue to function. It isn't about avoiding failures but accepting the fact that failures will happen and responding to them in a way that avoids downtime or data loss. The goal of resiliency is to return the application to a fully functioning state after a failure.
References: https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/
NEW QUESTION 49
You are building a container image and pushing it to the Oracle Cloud Infrastructure Registry (OCIR). You need to make sure that old images get deleted from the repository. Which action should you take?
* Create a group and assign a policy to perform lifecycle operations on images.
* Set the global policy of image retention to "Retain All Images".
* In your compartment, write a policy to limit access to the specific repository.
* Edit the tenancy global retention policy.
Explanation
Deleting an Image
When you no longer need an old image or you simply want to clean up the list of image tags in a repository, you can delete images from Oracle Cloud Infrastructure Registry. Your permissions control the images in Oracle Cloud Infrastructure Registry that you can delete. You can delete images from repositories you've created, and from repositories to which the groups you belong to have been granted access by identity policies. If you belong to the Administrators group, you can delete images from any repository in the tenancy. Note that as well as deleting individual images, you can set up image retention policies to delete images automatically based on selection criteria you specify (see Retaining and Deleting Images Using Retention Policies).
Note:
In each region in a tenancy, there's a global image retention policy. The global image retention policy's default selection criteria retain all images so that no images are automatically deleted. However, you can change the global image retention policy so that images are deleted if they meet the criteria you specify. A region's global image retention policy applies to all repositories in the region, unless it is explicitly overridden by one or more custom image retention policies. You can set up custom image retention policies to override the global image retention policy with different criteria for specific repositories in a region. Having created a custom image retention policy, you apply the custom retention policy to a repository by adding the repository to the policy.
The global image retention policy no longer applies to repositories that you add to a custom retention policy.
https://docs.cloud.oracle.com/en-us/iaas/Content/Registry/Tasks/registrymanagingimageretention.htm
NEW QUESTION 50
Which two statements are true for service choreography?
* Service choreographer is responsible for invoking other services.
* Services involved in choreography communicate through messages/messaging systems.
* Service choreography relies on a central coordinator.
* Service choreography should not use events for communication.
* Decision logic in service choreography is distributed.
Explanation
Service Choreography
Service choreography is a global description of the participating services, which is defined by the exchange of messages, rules of interaction, and agreements between two or more endpoints. Choreography employs a decentralized approach for service composition: the decision logic is distributed, with no centralized point. Choreography, in contrast, does not rely on a central coordinator, and all participants in the choreography need to be aware of the business process, operations to execute, messages to exchange, and the timing of message exchanges.
References: https://stackoverflow.com/questions/4127241/orchestration-vs-choreography/33316988
NEW QUESTION 51
What can you use to dynamically make Kubernetes resources discoverable to public DNS servers?
* ExternalDNS
* CoreDNS
* DynDNS
* KubeDNS
Explanation
Setting up ExternalDNS for Oracle Cloud Infrastructure (OCI):
Inspired by KubeDNS, Kubernetes' cluster-internal DNS server, ExternalDNS makes Kubernetes resources discoverable via public DNS servers. Like KubeDNS, it retrieves a list of resources (Services, Ingresses, etc.)
from the Kubernetes API to determine a desired list of DNS records. In a broader sense, ExternalDNS allows you to control DNS records dynamically via Kubernetes resources in a DNS provider-agnostic way.
Deploy ExternalDNS
Connect your kubectl client to the cluster you want to test ExternalDNS with. We first need to create a config file containing the information needed to connect with the OCI API. Create a new file (oci.yaml) and modify the contents to match the example below. Be sure to adjust the values to match your own credentials:
auth:
  region: us-phoenix-1
  tenancy: ocid1.tenancy.oc1…
  user: ocid1.user.oc1…
  key: |
    -----BEGIN RSA PRIVATE KEY-----
    -----END RSA PRIVATE KEY-----
  fingerprint: af:81:71:8e…
compartment: ocid1.compartment.oc1…
References:
https://github.com/kubernetes-sigs/external-dns/blob/master/README.md
https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/oracle.md
NEW QUESTION 52
Which two statements are true for serverless computing and serverless architectures?
* Long running tasks are perfectly suited for serverless
* Serverless function state should never be stored externally
* Application DevOps team is responsible for scaling
* Serverless function execution is fully managed by a third party
* Applications running on a FaaS (Functions as a Service) platform
Explanation
Oracle Functions is a fully managed, multi-tenant, highly scalable, on-demand, Functions-as-a-Service platform. It is built on enterprise-grade Oracle Cloud Infrastructure and powered by the Fn Project open source engine. Use Oracle Functions (sometimes abbreviated to just Functions) when you want to focus on writing code to meet business needs. The serverless and elastic architecture of Oracle Functions means there's no infrastructure administration or software administration for you to perform. You don't provision or maintain compute instances, and operating system software patches and upgrades are applied automatically.
Oracle Functions simply ensures your app is highly available, scalable, secure, and monitored. Applications built with a serverless infrastructure will scale automatically as the user base grows or usage increases. If a function needs to be run in multiple instances, the vendor's servers will start up, run, and end them as they are needed. Oracle Functions is based on the Fn Project, an open source, container-native, serverless platform that can be run anywhere – any cloud or on-premises.
Serverless architectures are not built for long-running processes. This limits the kinds of applications that can cost-effectively run in a serverless architecture. Because serverless providers charge for the amount of time code is running, it may cost more to run an application with long-running processes in a serverless infrastructure compared to a traditional one.
https://docs.cloud.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsconcepts.htm
https://www.cloudflare.com/learning/serverless/why-use-serverless/
NEW QUESTION 53
You encounter an unexpected error when invoking the Oracle Function named "myfunction" in application "myapp". Which can you use to get more information on the error?
* fn --debug invoke myapp myfunction
* DEBUG=1 fn invoke myapp myfunction
* fn --verbose invoke myapp myfunction
* Call Oracle support with your error message
Explanation
Troubleshooting Oracle Functions
If you encounter an unexpected error when using an Fn Project CLI command, you can find out more about the problem by starting the command with the string DEBUG=1 and running the command again. For example:
$ DEBUG=1 fn invoke helloworld-app helloworld-func
Note that DEBUG=1 must appear before the command, and that DEBUG must be in upper case.
NEW QUESTION 54
Which testing approach is a must for achieving a high velocity of deployments and releases of cloud-native applications?
* Integration testing
Integration testing  A/B testing  Automated testing  Penetration testing ExplanationOracle Cloud Infrastructure provides a number of DevOps tools and plug-ins for working with Oracle Cloud Infrastructure services. These can simplify provisioning and managing infrastructure or enable automated testing and continuous delivery.A/B TestingWhile A/B testing can be combined with either canary orblue-green deployments, it is a very different thing.A/B testing really targets testing the usage behavior of a service or feature and is typically used to validate a hypothesis or to measure two versions of a service or feature and how they stack up against each other in terms of performance, discoverability and usability. A/B testing often leverages feature flags (feature toggles), which allow you to dynamically turn features on and off.Integration TestingIntegration tests are also known as end-to-end(e2e) tests. These are long-running tests that exercise the system in the way it is intended to be used in production. These are the most valuable tests in demonstrating reliability and thus increasing confidence.Penetration TestingOracle regularly performs penetration and vulnerability testing and security assessments against the Oracle cloud infrastructure, platforms, and applications. 
These tests are intended to validate and improve the overall security of Oracle Cloud Services.
References: https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/devopstools.htm
Oracle 1Z0-1084-21 Exam Syllabus Topics:
* Topic 1: Developing Cloud Native Applications; Manage multiple environments (dev, test, stage, prod)
* Topic 2: Securing Cloud Native Applications; Use the Defense-in-depth approach
* Topic 3: Configure and Use Secret Management; Operating Cloud Native Applications; Cloud Native Fundamentals
* Topic 4: Perform Tasks around Monitoring, Observability, and Alerting; Explain Distributed Computing
Dumps for Free 1Z0-1084-21 Practice Exam Questions: https://www.braindumpsit.com/1Z0-1084-21_real-exam.html
Post date: 2022-12-08 12:00:07 GMT