• Forms, various Ways of worship in the early church https://youtu.be/79IKsUSwWQs?si=BkgCEFikWe3sfSJ0



  • Data Understanding


    Data Science Methodology

    Essentially, the data understanding section of the data science methodology answers the question:
    Is the data that you collected representative of the problem to be solved?
    In order to understand the data related to congestive heart failure admissions,
    descriptive statistics needed to be run against the data columns that would become variables in the model.
    1. First, these statistics included Hurst, univariates, and statistics on each variable, such as mean, median, minimum, maximum, and standard deviation.
    2. Second, pairwise correlations were used, to see how closely certain variables were related, and which ones, if any, were very highly correlated, meaning that they would be essentially redundant, thus making only one relevant for modeling.
    3. Third, histograms of the variables were examined to understand their distributions.
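
    As a rough illustration of these three steps, a minimal pandas sketch might look like the following. The file name chf_admissions.csv, the correlation cut-off of 0.9, and the columns implied here are hypothetical stand-ins, not the actual study data.

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("chf_admissions.csv")            # hypothetical extract of admission records
numeric_cols = df.select_dtypes(include="number").columns

# Step 1: univariate statistics (count, mean, std, min, quartiles, max) per variable
print(df[numeric_cols].describe())

# Step 2: pairwise correlations; flag pairs so highly correlated that one is redundant
corr = df[numeric_cols].corr()
redundant_pairs = [
    (a, b)
    for a in corr.columns
    for b in corr.columns
    if a < b and abs(corr.loc[a, b]) > 0.9
]
print("Highly correlated (possibly redundant):", redundant_pairs)

# Step 3: histograms of each numeric variable to inspect distributions
df[numeric_cols].hist(bins=30)
plt.tight_layout()
plt.show()
```

    The 0.9 threshold is only a judgment call; the point is to surface candidate redundancies for review, not to drop variables automatically.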

    Histograms are a good way to understand how the values of a variable are distributed, and which sorts of data preparation may be needed to make the variable more useful in a model.
    For example, for a categorical variable that has too many distinct values to be informative in a model, the histogram would help decide how to consolidate those values.
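
    One way such a consolidation could be sketched is shown below; the column name diagnosis_code and the cut-off of 20 retained levels are illustrative assumptions, and df is the hypothetical DataFrame from the sketch above.

```python
# Inspect how observations spread across the levels (a histogram for categoricals).
counts = df["diagnosis_code"].value_counts()
print(counts.head(25))

# Keep the most common levels and fold the long tail into "Other",
# so the variable still carries usable signal in a model.
top_levels = counts.head(20).index
df["diagnosis_grouped"] = df["diagnosis_code"].where(
    df["diagnosis_code"].isin(top_levels), other="Other"
)
print(df["diagnosis_grouped"].value_counts())
```
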
    The univariates, statistics, and histograms are also used to assess data quality. From the information provided, certain values can be re-coded or perhaps even dropped if necessary, such as when a certain variable has missing values.
    The question then becomes, does "missing" mean anything?
    Sometimes a missing value might mean "no" or "0" (zero); at other times it simply means "we don't know". A variable may also contain invalid or misleading values, such as a numeric variable called "age" that contains values from 0 to 100 and also 999, where that "triple-9" actually means "missing" but would be treated as a valid value unless we corrected it.
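
    A sketch of how that "triple-9" case might be handled, again assuming the hypothetical df and an age column from the earlier sketch; the median imputation at the end is just one possible choice, not the study's actual treatment.

```python
import numpy as np

# Recode the sentinel value so it is treated as missing rather than as a valid age.
df["age"] = df["age"].replace(999, np.nan)
print("Missing ages after recoding:", df["age"].isna().sum())

# Then decide what "missing" means: drop the rows, leave them missing,
# or impute, e.g. with the median (a common but not universal choice).
df["age"] = df["age"].fillna(df["age"].median())
```
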
    Initially, the meaning of congestive heart failure admission was decided on the basis of a primary diagnosis of congestive heart failure.
    But working through the data understanding stage revealed that the initial definition was not capturing all of the congestive heart failure admissions that were expected, based on clinical experience.
    This meant looping back to the data collection stage and adding secondary and tertiary diagnoses, and building a more comprehensive definition of congestive heart failure admission.
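
    A hypothetical sketch of what that broadened definition could look like in code: a record is flagged as a CHF admission if heart failure appears in any of the three diagnosis fields. The column names and the ICD-9 prefix "428" are illustrative assumptions, not the study's actual coding scheme, and df is the hypothetical DataFrame from the first sketch.

```python
diagnosis_cols = ["primary_dx", "secondary_dx", "tertiary_dx"]   # hypothetical columns

# True where any of the three diagnosis codes starts with the (assumed) CHF prefix.
is_chf = (
    df[diagnosis_cols]
    .astype(str)
    .apply(lambda col: col.str.startswith("428"))
    .any(axis=1)
)
df["chf_admission"] = is_chf
print("CHF admissions captured:", int(is_chf.sum()))
```
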
    This is just one example of the iterative processes in the methodology.
    The more one works with the problem and the data, the more one learns and therefore the more refinement that can be done within the model, ultimately leading to a better solution to the problem.
    This ends the Data Understanding section of this course.




    Okiliong
    What are some potential challenges or issues that can arise during the data understanding stage?


    During the data understanding stage, there are several potential challenges or issues that can arise. Some of them include:

    1. Insufficient or incomplete data: The data collected may not be sufficient or may have missing values, which can affect the accuracy and reliability of the analysis.

    2. Data quality issues: The data may contain errors, outliers, or inconsistencies that need to be addressed before proceeding with the analysis.

    3. Data bias: The data may be biased towards certain groups or may not be representative of the population, leading to biased results and conclusions.

    4. Data privacy and security: Handling sensitive or confidential data requires ensuring proper security measures are in place to protect the data and comply with privacy regulations.

    5. Data compatibility: Data from different sources may have different formats, structures, or units, making it challenging to integrate and analyze them together (a short sketch of this follows the list).

    6. Data interpretation: Understanding the meaning and context of the data can be complex, especially when dealing with complex variables or domain-specific terminology.

    7. Data volume and scalability: Large datasets can pose challenges in terms of storage, processing power, and computational resources required for analysis.

    8. Stakeholder involvement: Involving stakeholders and subject matter experts in the data understanding stage can be challenging, especially if there are communication gaps or conflicting priorities.

    It's important to address these challenges and issues during the data understanding stage to ensure the accuracy and reliability of the subsequent analysis and modeling steps.
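
    To make the data compatibility point (item 5) concrete, here is a small, entirely hypothetical example of two sources that describe the same patients with different column names and units, harmonized before merging.

```python
import pandas as pd

clinic = pd.DataFrame({"patient_id": [1, 2], "weight_kg": [70.0, 82.5]})
claims = pd.DataFrame({"PatientID": [1, 2], "weight_lb": [154.3, 181.9]})

# Harmonize column names and units before combining the sources.
claims = claims.rename(columns={"PatientID": "patient_id"})
claims["weight_kg_claims"] = claims["weight_lb"] / 2.20462

merged = clinic.merge(claims[["patient_id", "weight_kg_claims"]], on="patient_id")
print(merged)
```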


  • Lessons Relevance's, APPLICATIONs of kerygma to the modern Christians https://youtu.be/1HxaeK1bc_w?si=MNfhmnqKc0x9wbBw
  • The life Activities of the early church https://youtu.be/4R0w9jU3r-o?list=PLGyQ_RMbdfNPPHC5Nk8PsvQbBKDkoXLIK
  • Piggery Project
    0% $0 Raised of $1575
    I have a piggery project but lack enough funds to construct a permanent structure and buy enough feed, so my pigs are in poor condition due to bad weather as well as inadequate feed. I therefore request any willing individual or company to support my project.
  • Real Madrid vs. Atalanta: Extended Highlights | UEFA Super Cup | CBS Sports Golazo

  • Data Understanding.

    Okiliong
    Can you explain the purpose of the data understanding stage in the data science methodology?


    Certainly! The data understanding stage is a crucial step in the data science methodology.
    Its purpose is to gain a deep understanding of the data that has been collected and determine if it is representative of the problem to be solved. Here are the key objectives of the data understanding stage:

    1. Assess Data Quality:
    The data understanding stage involves evaluating the quality of the collected data.
    This includes checking for missing values, outliers, inconsistencies, and data errors.
    By identifying and addressing data quality issues, we can ensure the reliability and accuracy of the subsequent analysis.

    2. Explore Data Relationships: During this stage, we examine the relationships between different variables in the dataset.
    This includes calculating descriptive statistics, such as mean, median, minimum, maximum, and standard deviation, to understand the distribution of the data. Pairwise correlations are also analyzed to identify variables that are closely related or redundant.

    3. Understand Data Distributions:
    Histograms and other visualization techniques are used to understand the distribution of variables. This helps in identifying patterns, outliers, and potential data transformations that may be required for modeling (a brief plotting sketch follows this list).

    4. Determine Data Relevance: The data understanding stage helps us determine if the collected data is relevant to the problem at hand.
    It allows us to assess if the data includes all the necessary variables and if any additional data needs to be collected to address potential gaps.

    5. Refine Problem Definition: Through the data understanding stage, we may discover insights or patterns that challenge or refine our initial problem definition. This iterative process helps us gain a deeper understanding of the problem and adjust our approach accordingly.
    By thoroughly understanding the data, we can make informed decisions about data preprocessing, feature engineering, and modeling techniques. This stage sets the foundation for the subsequent stages of the data science methodology, such as data preparation, modeling, and evaluation.
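
    As an illustration of point 3, a right-skewed variable can be plotted before and after a log transform to judge whether a transformation would help. The column length_of_stay and the transform itself are assumptions for the sketch, with df as the hypothetical DataFrame used earlier.

```python
import matplotlib.pyplot as plt
import numpy as np

los = df["length_of_stay"].dropna()              # hypothetical skewed variable

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].hist(los, bins=30)
axes[0].set_title("Raw distribution")

# A log(1 + x) transform is one common preparation step for right-skewed data.
axes[1].hist(np.log1p(los), bins=30)
axes[1].set_title("After log(1 + x) transform")
plt.show()
```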

    Okiliong
    How could you use descriptive statistics to assess the quality of the data set?


    Descriptive statistics can be used to assess the quality of a dataset in several ways.
    Here are some examples:

    1. Identifying Missing Values: Descriptive statistics, such as the count of non-missing values, can help identify variables or columns with missing data.
    If a variable has a significantly lower count compared to the total number of observations, it indicates missing values that need to be addressed.

    2. Checking Data Range: Descriptive statistics like minimum and maximum values can provide insights into the range of values for each variable. Unusually large or small values that fall outside the expected range may indicate data entry errors or outliers that need to be investigated.

    3. Assessing Data Distribution: Descriptive statistics such as mean, median, and standard deviation can provide information about the distribution of the data. If the mean and median are significantly different, it suggests the presence of outliers or skewness in the data.

    4. Identifying Data Errors: Descriptive statistics can help identify potential data errors or inconsistencies. For example, if a variable represents age and has a minimum value of -10 or a maximum value of 200, it indicates data entry errors or invalid values that need to be corrected.

    5. Comparing Variables: Descriptive statistics can be used to compare variables and identify potential issues. For example, if two variables are expected to be highly correlated but have very different means or ranges, it may indicate a data quality issue or a problem with data collection.

    6. Assessing Data Completeness: Descriptive statistics can provide insights into the completeness of the data. For example, if a variable has a high count of missing values or a significantly lower count compared to other variables, it suggests data incompleteness that needs to be addressed.
    By using descriptive statistics, data scientists can gain a better understanding of the quality and characteristics of the dataset. This information helps in identifying data quality issues, outliers, and inconsistencies that need to be addressed before proceeding with further analysis and modeling.
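
    The checks above could be bundled into a small report, for example as sketched below. The function, the expected ranges, and the column names are hypothetical, intended only to show how descriptive statistics translate into concrete quality checks on a DataFrame df such as the one used earlier.

```python
import pandas as pd

def quality_report(df: pd.DataFrame, expected_ranges: dict) -> pd.DataFrame:
    """Completeness, range violations, and a mean-vs-median skew hint per variable."""
    rows = []
    for col, (lo, hi) in expected_ranges.items():
        s = pd.to_numeric(df[col], errors="coerce")
        rows.append({
            "variable": col,
            "non_missing": int(s.notna().sum()),
            "out_of_range": int(((s < lo) | (s > hi)).sum()),
            "mean": s.mean(),
            "median": s.median(),   # a large mean/median gap hints at outliers or skew
        })
    return pd.DataFrame(rows)

# Hypothetical expected ranges for an admissions extract.
print(quality_report(df, {"age": (0, 110), "length_of_stay": (0, 365)}))
```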

    Thanks,
