Title: BraindumpsIT DP-100 Dumps PDF - 100% Passing Guarantee [Q73-Q91]
---------------------------------------------------
BraindumpsIT DP-100 Dumps PDF - 100% Passing Guarantee
DP-100 Braindumps Real Exam Updated on May 04, 2022 with 266 Questions

What Next after This Certification?
The certification earned by passing the DP-100 exam is proof of intermediate skills in data science, since it sits within the associate tier. Apart from applying for and landing a job at a preferred company, you can also expand your skills in an area of interest. Within the role-based scheme, there are expert certifications waiting to be earned, including Microsoft Certified: Azure Solutions Architect Expert and Microsoft Certified: Azure DevOps Engineer Expert. The dream of becoming a highly skilled data scientist can turn into reality with the help of the Microsoft DP-100 exam, which aims to impart an associate-level understanding of data science and machine learning and to build a skilled workforce of data scientists.

Career Opportunities & Salary Outlook
Demand for certified professionals is on the rise, and as an Azure Data Scientist you can explore many employment opportunities. So, if you want to boost your career potential, pursuing the Microsoft Certified: Azure Data Scientist Associate certification is the right step. To earn this certificate, you have to pass the qualifying exam, Microsoft DP-100. With this associate-level certification, you can take up positions such as Data Scientist, Senior Data Scientist, Data Science Manager, and Data Science Director. The average salary for these job roles is $135,000 per annum; with advanced experience, specialists can earn up to $170,000 per year.

Q73. You must store data in Azure Blob Storage to support Azure Machine Learning.
You need to transfer the data into Azure Blob Storage.
What are three possible ways to achieve the goal? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Bulk Insert SQL Query
B. AzCopy
C. Python script
D. Azure Storage Explorer
E. Bulk Copy Program (BCP)
Explanation:
You can move data to and from Azure Blob storage using different technologies:
* Azure Storage Explorer
* AzCopy
* Python
* SSIS
References:
https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/move-azure-blob
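As a quick illustration of the Python-script option, here is a minimal upload sketch using the azure-storage-blob (v12) package. The connection string, container name, and file name are placeholders, not values from the question.

# Upload a local file to Azure Blob Storage with the v12 Python SDK.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("training-data")  # hypothetical container

with open("local_data.csv", "rb") as data:
    container.upload_blob(name="local_data.csv", data=data, overwrite=True)

AzCopy performs the same transfer from the command line, and Azure Storage Explorer provides a GUI over the same operations.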
Q74. You need to select a feature extraction method.
Which method should you use?
A. Mutual information
B. Mood's median test
C. Kendall correlation
D. Permutation Feature Importance
Explanation:
In statistics, the Kendall rank correlation coefficient, commonly referred to as Kendall's tau coefficient (after the Greek letter τ), is a statistic used to measure the ordinal association between two measured quantities. It is a supported method of Azure Machine Learning feature selection.
Scenario: When you train a Linear Regression module using a property dataset that shows data for property prices for a large city, you need to determine the best features to use in a model. You can choose standard metrics provided to measure performance before and after the feature importance process completes. You must ensure that the distribution of the features across multiple training models is consistent.
References:
https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/feature-selection-modules
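For context, Kendall's tau is also easy to compute outside the designer with SciPy; a minimal sketch on made-up property data:

# Kendall rank correlation between two numeric columns (dummy values).
import numpy as np
from scipy.stats import kendalltau

prices = np.array([240, 310, 150, 410, 280])  # hypothetical property prices
rooms = np.array([3, 4, 2, 5, 3])             # hypothetical room counts

tau, p_value = kendalltau(rooms, prices)
print(f"Kendall tau = {tau:.3f}, p-value = {p_value:.3f}")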
Q75. You are performing clustering by using the K-means algorithm.
You need to define the possible termination conditions.
Which three conditions can you use? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. A fixed number of iterations is executed.
B. The residual sum of squares (RSS) rises above a threshold.
C. The sum of distances between centroids reaches a maximum.
D. The residual sum of squares (RSS) falls below a threshold.
E. Centroids do not change between iterations.
References:
https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/k-means-clustering
https://nlp.stanford.edu/IR-book/html/htmledition/k-means-1.html
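To make these termination conditions concrete, here is a rough NumPy sketch of the K-means loop. It is illustrative only and assumes no cluster ever empties out; the threshold and iteration values are arbitrary.

# Illustrative K-means loop showing three possible termination conditions.
import numpy as np

def kmeans(X, k, max_iters=100, rss_threshold=1e-4, seed=0):
    # Condition 1: a fixed number of iterations is executed (max_iters).
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        rss = ((X - new_centroids[labels]) ** 2).sum()
        # Condition 2: centroids do not change between iterations.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
        # Condition 3: the residual sum of squares falls below a threshold.
        if rss < rss_threshold:
            break
    return labels, centroids

labels, centroids = kmeans(np.random.default_rng(1).random((200, 2)), k=3)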
Q76. You have a Python data frame named salesData in the following format:
The data frame must be unpivoted to a long data format as follows:
You need to use the pandas.melt() function in Python to perform the transformation.
How should you complete the code segment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation:
Box 1: dataFrame
Syntax: pandas.melt(frame, id_vars=None, value_vars=None, var_name=None, value_name='value', col_level=None), where frame is a DataFrame.
Box 2: shop
Parameter id_vars : tuple, list, or ndarray, optional. Column(s) to use as identifier variables.
Box 3: ['2017', '2018']
value_vars : tuple, list, or ndarray, optional. Column(s) to unpivot. If not specified, uses all columns that are not set as id_vars.
Example:

df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
                   'B': {0: 1, 1: 3, 2: 5},
                   'C': {0: 2, 1: 4, 2: 6}})
pd.melt(df, id_vars=['A'], value_vars=['B', 'C'])

   A variable  value
0  a        B      1
1  b        B      3
2  c        B      5
3  a        C      2
4  b        C      4
5  c        C      6

References:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html

Q77. You have a comma-separated values (CSV) file containing data from which you want to train a classification model.
You are using the Automated Machine Learning interface in Azure Machine Learning studio to train the classification model. You set the task type to Classification.
You need to ensure that the Automated Machine Learning process evaluates only linear models.
What should you do?
A. Add all algorithms other than linear ones to the blocked algorithms list.
B. Set the Exit criterion option to a metric score threshold.
C. Clear the option to perform automatic featurization.
D. Clear the option to enable deep learning.
E. Set the task type to Regression.
Explanation:
Automatic featurization can fit non-linear models.
Reference:
https://econml.azurewebsites.net/spec/estimation/dml.html
https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-automated-ml-for-ml-models
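If you need the same constraint in the AutoML Python SDK rather than the studio UI, recent versions of azureml-train-automl expose a blocked-models list. The sketch below is illustrative only: the dataset, label column, and the exact set of blocked model names are assumptions, not part of the question.

# Illustrative AutoMLConfig that blocks non-linear learners so only
# linear models are evaluated; the blocked list here is not exhaustive.
from azureml.train.automl import AutoMLConfig

automl_config = AutoMLConfig(
    task="classification",
    training_data=train_ds,          # assumed: a TabularDataset in the workspace
    label_column_name="label",       # assumed label column
    primary_metric="AUC_weighted",
    blocked_models=["XGBoostClassifier", "LightGBM", "RandomForest",
                    "ExtremeRandomTrees", "GradientBoosting", "KNN",
                    "DecisionTree", "SVM"],
)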
Develop models
Testlet 1
Case study
Overview
You are a data scientist in a company that provides data science for professional sporting events. Models will use global and local market data to meet the following business goals:
* Understand sentiment of mobile device users at sporting events based on audio from crowd reactions.
* Assess a user's tendency to respond to an advertisement.
* Customize styles of ads served on mobile devices.
* Use video to detect penalty events.
Current environment
* Media used for penalty event detection will be provided by consumer devices. Media may include images and videos captured during the sporting event and shared using social media. The images and videos will have varying sizes and formats.
* The data available for model building comprises seven years of sporting event media. The sporting event media includes recorded video, transcripts of radio commentary, and logs from related social media feeds captured during the sporting events.
* Crowd sentiment will include audio recordings submitted by event attendees in both mono and stereo formats.
Penalty detection and sentiment
* Data scientists must build an intelligent solution by using multiple machine learning models for penalty event detection.
* Data scientists must build notebooks in a local environment using automatic feature engineering and model building in machine learning pipelines.
* Notebooks must be deployed to retrain by using Spark instances with dynamic worker allocation.
* Notebooks must execute with the same code on new Spark instances to recode only the source of the data.
* Global penalty detection models must be trained by using dynamic runtime graph computation during training.
* Local penalty detection models must be written by using BrainScript.
* Experiments for local crowd sentiment models must combine local penalty detection data.
* Crowd sentiment models must identify known sounds such as cheers and known catch phrases. Individual crowd sentiment models will detect similar sounds.
* All shared features for local models are continuous variables.
* Shared features must use double precision. Subsequent layers must have aggregate running mean and standard deviation metrics available.
Advertisements
During the initial weeks in production, the following was observed:
* Ad response rates declined.
* Drops were not consistent across ad styles.
* The distribution of features across training and production data is not consistent.
Analysis shows that, of the 100 numeric features on user location and behavior, the 47 features that come from location sources are being used as raw features. A suggested experiment to remedy the bias and variance issue is to engineer 10 linearly uncorrelated features.
* Initial data discovery shows a wide range of densities of target states in training data used for crowd sentiment models.
* All penalty detection models show that inference phases using Stochastic Gradient Descent (SGD) are running too slowly.
* Audio samples show that the length of a catch phrase varies between 25%-47% depending on region.
* The performance of the global penalty detection models shows lower variance but higher bias when comparing training and validation sets. Before implementing any feature changes, you must confirm the bias and variance using all training and validation cases.
* Ad response models must be trained at the beginning of each event and applied during the sporting event.
* Market segmentation models must optimize for similar ad response history.
* Sampling must guarantee mutual and collective exclusivity between local and global segmentation models that share the same features.
* Local market segmentation models will be applied before determining a user's propensity to respond to an advertisement.
* Ad response models must support non-linear boundaries of features.
* The ad propensity model uses a cut threshold of 0.45, and retraining occurs if the weighted Kappa deviates from 0.1 by +/- 5%.
* The ad propensity model uses cost factors shown in the following diagram:
* The ad propensity model uses proposed cost factors shown in the following diagram:
* Performance curves of current and proposed cost factor scenarios are shown in the following diagram:

Q78. You are creating a binary classification by using a two-class logistic regression model.
You need to evaluate the model results for imbalance.
Which evaluation metric should you use?
A. Relative Absolute Error
B. AUC Curve
C. Mean Absolute Error
D. Relative Squared Error
E. Accuracy
F. Root Mean Square Error
Explanation:
One can inspect the true positive rate vs. the false positive rate in the Receiver Operating Characteristic (ROC) curve and the corresponding Area Under the Curve (AUC) value. The closer this curve is to the upper left corner, the better the classifier's performance (that is, maximizing the true positive rate while minimizing the false positive rate). Curves that are close to the diagonal of the plot result from classifiers that tend to make predictions close to random guessing.
References:
https://docs.microsoft.com/en-us/azure/machine-learning/studio/evaluate-model-performance#evaluating-a-binary-classification-model
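As a quick aside, the AUC value described above can be computed directly with scikit-learn; a minimal sketch with dummy labels and scores:

# AUC from true labels and predicted probabilities (dummy data).
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                    # dummy labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.3]  # dummy predicted probabilities

print("AUC:", roc_auc_score(y_true, y_score))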
Q79. You are using the Azure Machine Learning Service to automate hyperparameter exploration of your neural network classification model.
You must define the hyperparameter space to automatically tune hyperparameters using random sampling according to the following requirements:
* The learning rate must be selected from a normal distribution with a mean value of 10 and a standard deviation of 3.
* Batch size must be 16, 32, or 64.
* Keep probability must be a value selected from a uniform distribution between the range of 0.05 and 0.1.
You need to use the param_sampling method of the Python API for the Azure Machine Learning Service.
How should you complete the code segment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation:
In random sampling, hyperparameter values are randomly selected from the defined search space. Random sampling allows the search space to include both discrete and continuous hyperparameters.
Example:

from azureml.train.hyperdrive import RandomParameterSampling
from azureml.train.hyperdrive import normal, uniform, choice
param_sampling = RandomParameterSampling({
    "learning_rate": normal(10, 3),
    "keep_probability": uniform(0.05, 0.1),
    "batch_size": choice(16, 32, 64)
})

Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-tune-hyperparameters

Q80. You plan to use the Hyperdrive feature of Azure Machine Learning to determine the optimal hyperparameter values when training a model.
You must use Hyperdrive to try combinations of the following hyperparameter values. You must not apply an early termination policy.
* learning_rate: any value between 0.001 and 0.1
* batch_size: 16, 32, or 64
You need to configure the sampling method for the Hyperdrive experiment.
Which two sampling methods can you use? Each correct answer is a complete solution.
NOTE: Each correct selection is worth one point.
A. Grid sampling
B. No sampling
C. Bayesian sampling
D. Random sampling
Explanation:
C: Bayesian sampling is based on the Bayesian optimization algorithm and makes intelligent choices on the hyperparameter values to sample next. It picks the sample based on how the previous samples performed, such that the new sample improves the reported primary metric. Bayesian sampling does not support any early termination policy.
Example:

from azureml.train.hyperdrive import BayesianParameterSampling
from azureml.train.hyperdrive import uniform, choice
param_sampling = BayesianParameterSampling({
    "learning_rate": uniform(0.05, 0.1),
    "batch_size": choice(16, 32, 64, 128)
})

D: In random sampling, hyperparameter values are randomly selected from the defined search space. Random sampling allows the search space to include both discrete and continuous hyperparameters.
Incorrect Answers:
A: Grid sampling can be used if your hyperparameter space can be defined as a choice among discrete values and if you have sufficient budget to exhaustively search over all values in the defined search space. Additionally, one can use automated early termination of poorly performing runs, which reduces wastage of resources.
Example (the following space has a total of six samples):

from azureml.train.hyperdrive import GridParameterSampling
from azureml.train.hyperdrive import choice
param_sampling = GridParameterSampling({
    "num_hidden_layers": choice(1, 2, 3),
    "batch_size": choice(16, 32)
})

Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters

Q81. You need to select a feature extraction method.
Which method should you use?
A. Mutual information
B. Mood's median test
C. Kendall correlation
D. Permutation Feature Importance
Explanation:
In statistics, the Kendall rank correlation coefficient, commonly referred to as Kendall's tau coefficient (after the Greek letter τ), is a statistic used to measure the ordinal association between two measured quantities. It is a supported method of Azure Machine Learning feature selection.
Note: Both Spearman's and Kendall's can be formulated as special cases of a more general correlation coefficient, and they are both appropriate in this scenario.
Scenario: The MedianValue and AvgRoomsInHouse columns both hold data in numeric format. You need to select a feature selection algorithm to analyze the relationship between the two columns in more detail.
References:
https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/feature-selection-modules

Q82. You create a datastore named training_data that references a blob container in an Azure Storage account. The blob container contains a folder named csv_files in which multiple comma-separated values (CSV) files are stored.
You have a script named train.py in a local folder named ./script that you plan to run as an experiment using an estimator. The script includes the following code to read data from the csv_files folder:
You have the following script.
You need to configure the estimator for the experiment so that the script can read the data from a data reference named data_ref that references the csv_files folder in the training_data datastore.
Which code should you use to configure the estimator?
A. Option A
B. Option B
C. Option C
D. Option D
E. Option E
Explanation:
Besides passing the dataset through the inputs parameter in the estimator, you can also pass the dataset through script_params and get the data path (mounting point) in your training script via arguments. This way, you can keep your training script independent of azureml-sdk. In other words, you will be able to use the same training script for local debugging and remote training on any cloud platform.
Example:

from azureml.train.sklearn import SKLearn

script_params = {
    # mount the dataset on the remote compute and pass the mounted path
    # as an argument to the training script
    '--data-folder': mnist_ds.as_named_input('mnist').as_mount(),
    '--regularization': 0.5
}

est = SKLearn(source_directory=script_folder,
              script_params=script_params,
              compute_target=compute_target,
              environment_definition=env,
              entry_script='train_mnist.py')

# Run the experiment
run = experiment.submit(est)
run.wait_for_completion(show_output=True)

Incorrect Answers:
A: Pandas DataFrame not used.
Reference:
https://docs.microsoft.com/es-es/azure/machine-learning/how-to-train-with-datasets

Q83. You need to implement early stopping criteria as suited to the model training requirements.
Which three code segments should you use to develop the solution? To answer, move the appropriate code segments from the list of code segments to the answer area and arrange them in the correct order.
NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.
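The code segments for this hotspot are not reproduced in this export. As a hedged illustration only, an early-termination setup in the HyperDrive API typically combines a policy such as BanditPolicy with the sampling and run configuration; every name and value below is an assumption, not the exam's expected answer.

# Illustrative early-termination configuration for a HyperDrive run.
from azureml.core import ScriptRunConfig
from azureml.train.hyperdrive import (BanditPolicy, HyperDriveConfig,
                                      PrimaryMetricGoal,
                                      RandomParameterSampling, uniform)

src = ScriptRunConfig(source_directory=".", script="train.py")  # assumed script

# Stop runs whose primary metric falls outside the slack of the best run.
early_termination_policy = BanditPolicy(slack_factor=0.1,
                                        evaluation_interval=1,
                                        delay_evaluation=5)

param_sampling = RandomParameterSampling({"learning_rate": uniform(0.001, 0.1)})

hd_config = HyperDriveConfig(run_config=src,
                             hyperparameter_sampling=param_sampling,
                             policy=early_termination_policy,
                             primary_metric_name="accuracy",
                             primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
                             max_total_runs=20)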
Q84. You need to define a modeling strategy for ad response.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Explanation:
Step 1: Implement a K-Means Clustering model.
Step 2: Use the cluster as a feature in a Decision Jungle model. Decision jungles are non-parametric models, which can represent non-linear decision boundaries.
Step 3: Use the raw score as a feature in a Score Matchbox Recommender model. The goal of creating a recommendation system is to recommend one or more "items" to "users" of the system. Examples of an item could be a movie, restaurant, book, or song. A user could be a person, group of persons, or other entity with item preferences.
Scenario:
* Ad response rates declined.
* Ad response models must be trained at the beginning of each event and applied during the sporting event.
* Market segmentation models must optimize for similar ad response history.
* Ad response models must support non-linear boundaries of features.
References:
https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/multiclass-decision-jungle
https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/score-matchbox-recommender

Q85. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You train and register a machine learning model.
You plan to deploy the model as a real-time web service. Applications must use key-based authentication to use the model.
You need to deploy the web service.
Solution:
* Create an AksWebservice instance.
* Set the value of the auth_enabled property to False.
* Set the value of the token_auth_enabled property to True.
* Deploy the model to the service.
Does the solution meet the goal?
A. Yes
B. No
Explanation:
Instead, use only auth_enabled = TRUE.
Note: Key-based authentication. Web services deployed on AKS have key-based auth enabled by default. ACI-deployed services have key-based auth disabled by default, but you can enable it by setting auth_enabled = TRUE when creating the ACI web service. The following is an example of creating an ACI deployment configuration with key-based auth enabled (R SDK):

deployment_config <- aci_webservice_deployment_config(
    cpu_cores = 1,
    memory_gb = 1,
    auth_enabled = TRUE
)

Reference:
https://azure.github.io/azureml-sdk-for-r/articles/deploying-models.html
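For comparison with the R snippet above, here is a minimal Python sketch of the same idea using the azureml-core SDK; AKS web services enable key-based auth by default, while ACI services need auth_enabled set explicitly:

# Deployment configurations with key-based authentication (azureml-core).
from azureml.core.webservice import AciWebservice, AksWebservice

# ACI: key-based auth is disabled by default, so enable it explicitly.
aci_config = AciWebservice.deploy_configuration(cpu_cores=1,
                                                memory_gb=1,
                                                auth_enabled=True)

# AKS: key-based auth is enabled by default; keep token auth disabled.
aks_config = AksWebservice.deploy_configuration(auth_enabled=True,
                                                token_auth_enabled=False)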
Q86. You plan to create a speech recognition deep learning model.
The model must support the latest version of Python.
You need to recommend a deep learning framework for speech recognition to include in the Data Science Virtual Machine (DSVM).
What should you recommend?
A. Rattle
B. TensorFlow
C. Weka
D. Scikit-learn
Explanation:
TensorFlow is an open source library for numerical computation and large-scale machine learning. It uses Python to provide a convenient front-end API for building applications with the framework. TensorFlow can train and run deep neural networks for handwritten digit classification, image recognition, word embeddings, recurrent neural networks, sequence-to-sequence models for machine translation, natural language processing, and PDE (partial differential equation) based simulations.
Incorrect Answers:
A: Rattle is the R analytical tool that gets you started with data analytics and machine learning.
C: Weka is used for visual data mining and machine learning software in Java.
D: Scikit-learn is one of the most useful libraries for machine learning in Python. Built on NumPy, SciPy, and matplotlib, this library contains a lot of efficient tools for machine learning and statistical modeling, including classification, regression, clustering, and dimensionality reduction.
Reference:
https://www.infoworld.com/article/3278008/what-is-tensorflow-the-machine-learning-library-explained.html

Q87. You are performing sentiment analysis using a CSV file that includes 12,000 customer reviews written in a short sentence format. You add the CSV file to Azure Machine Learning Studio and configure it as the starting point dataset of an experiment. You add the Extract N-Gram Features from Text module to the experiment to extract key phrases from the customer review column in the dataset.
You must create a new n-gram dictionary from the customer review text and set the maximum n-gram size to trigrams.
What should you select? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation:
Vocabulary mode: Create
For Vocabulary mode, select Create to indicate that you are creating a new list of n-gram features.
N-Grams size: 3
For N-Grams size, type a number that indicates the maximum size of the n-grams to extract and store. For example, if you type 3, unigrams, bigrams, and trigrams will be created.
Weighting function: Leave blank
The option Weighting function is required only if you merge or update vocabularies. It specifies how terms in the two vocabularies and their scores should be weighted against each other.
References:
https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/extract-n-gram-features-from-

Q88. You are creating a machine learning model in Python. The provided dataset contains several numerical columns and one text column. The text column contains the following values:
* Biker
* Cars
* Vans
* Boats
You are building a regression model using the scikit-learn Python package.
You need to transform the text data to be compatible with the scikit-learn Python package.
How should you complete the code segment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
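The answer-area code for Q88 is not included in this export. As one hedged possibility, a small categorical text column like this is often made scikit-learn-compatible with LabelEncoder (or OneHotEncoder when the categories are not ordinal); the column name below is hypothetical:

# Encode a small categorical text column into numeric form for scikit-learn.
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({"vehicle": ["Biker", "Cars", "Vans", "Boats", "Cars"]})

encoder = LabelEncoder()
df["vehicle_encoded"] = encoder.fit_transform(df["vehicle"])
print(df)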
Q89. You are performing feature engineering on a dataset.
You must add a feature named CityName and populate the column value with the text London.
You need to add the new feature to the dataset.
Which Azure Machine Learning Studio module should you use?
A. Extract N-Gram Features from Text
B. Edit Metadata
C. Preprocess Text
D. Apply SQL Transformation
Explanation:
Typical metadata changes might include marking columns as features.
Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/edit-metadata

Q90. You have a dataset created for multiclass classification tasks that contains a normalized numerical feature set with 10,000 data points and 150 features.
You use 75 percent of the data points for training and 25 percent for testing. You are using the scikit-learn machine learning library in Python. You use X to denote the feature set and Y to denote class labels.
You create the following Python data frames:
You need to apply the Principal Component Analysis (PCA) method to reduce the dimensionality of the feature set to 10 features in both training and testing sets.
How should you complete the code segment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation:
Box 1: PCA(n_components = 10)
You need to reduce the dimensionality of the feature set to 10 features in both training and testing sets.
Example:

from sklearn.decomposition import PCA
pca = PCA(n_components=2)  # 2 dimensions
principalComponents = pca.fit_transform(x)

Box 2: pca
fit_transform(X[, y]) fits the model with X and applies the dimensionality reduction on X.
Box 3: transform(x_test)
transform(X) applies dimensionality reduction to X.
References:
https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html
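Pulling the three boxes together, here is a runnable sketch of the PCA reduction; x_train and x_test are placeholder arrays standing in for the question's data frames:

# Reduce a 150-feature set to 10 principal components; fit on training
# data only, then apply the same projection to the test set.
import numpy as np
from sklearn.decomposition import PCA

x_train = np.random.rand(7500, 150)  # placeholder for the training frame
x_test = np.random.rand(2500, 150)   # placeholder for the testing frame

pca = PCA(n_components=10)
x_train_reduced = pca.fit_transform(x_train)
x_test_reduced = pca.transform(x_test)

print(x_train_reduced.shape, x_test_reduced.shape)  # (7500, 10) (2500, 10)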
Q91. You have a dataset that contains over 150 features. You use the dataset to train a Support Vector Machine (SVM) binary classifier.
You need to use the Permutation Feature Importance module in Azure Machine Learning Studio to compute a set of feature importance scores for the dataset.
In which order should you perform the actions? To answer, move all actions from the list of actions to the answer area and arrange them in the correct order.
Explanation:
Step 1: Add a Two-Class Support Vector Machine module to initialize the SVM classifier.
Step 2: Add a dataset to the experiment.
Step 3: Add a Split Data module to create training and test datasets. To generate a set of feature scores requires that you have an already trained model, as well as a test dataset.
Step 4: Add a Permutation Feature Importance module and connect it to the trained model and test dataset.
Step 5: Set the Metric for measuring performance property to Classification - Accuracy and then run the experiment.
Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/two-class-support-vector-machine
https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/permutation-feature-importance

---------------------------------------------------
DP-100 Dumps With 100% Verified Q&As - Pass Guarantee or Full Refund: https://www.braindumpsit.com/DP-100_real-exam.html