• Learn Python, become a data science engineer.
  • DATABASE LESSON 1
  • DATABASE LESSON 10
  • We are excited to announce that our collaboration with Bishop Stuart University has officially begun! Over the next three months, we will be equipping our youth with vital computer skills, including Microsoft packages, graphic design, video editing, data management, internet literacy, and digital marketing.

    We call upon organizations and well-wishers to join us in this transformative initiative and support our efforts to empower the next generation with the skills they need to thrive in a digital world.

    #EmpoweringYouth #DigitalSkills #GoUseTech #BishopStuartUniversity #TechnologyForChange
  • DATA UNDERSTANDING


    Okiliong
    How could you use descriptive statistics to assess the quality of the data set?


    Descriptive statistics can be used to assess the quality of a dataset in several ways. Here are some examples:

    1. Identifying Missing Values: Descriptive statistics, such as the count of non-missing values, can help identify variables or columns with missing data. If a variable has a significantly lower count compared to the total number of observations, it indicates missing values that need to be addressed.

    2. Checking Data Range: Descriptive statistics like minimum and maximum values can provide insights into the range of values for each variable. Unusually large or small values that fall outside the expected range may indicate data entry errors or outliers that need to be investigated.

    3. Assessing Data Distribution: Descriptive statistics such as mean, median, and standard deviation can provide information about the distribution of the data. If the mean and median are significantly different, it suggests the presence of outliers or skewness in the data.

    4. Identifying Data Errors: Descriptive statistics can help identify potential data errors or inconsistencies. For example, if a variable represents age and has a minimum value of -10 or a maximum value of 200, it indicates data entry errors or invalid values that need to be corrected.

    5. Comparing Variables: Descriptive statistics can be used to compare variables and identify potential issues. For example, if two variables are expected to be highly correlated but have very different means or ranges, it may indicate a data quality issue or a problem with data collection.

    6. Assessing Data Completeness: Descriptive statistics can provide insights into the completeness of the data. For example, if a variable has a high count of missing values or a significantly lower count compared to other variables, it suggests data incompleteness that needs to be addressed.

    By using descriptive statistics, data scientists can gain a better understanding of the quality and characteristics of the dataset. This information helps in identifying data quality issues, outliers, and inconsistencies that need to be addressed before proceeding with further analysis and modeling.
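
    Assuming the data lives in a pandas DataFrame, a minimal sketch of these checks might look like the following; the file name and the "age" column are hypothetical placeholders, not from any specific dataset in this post.

        import pandas as pd

        df = pd.read_csv("patients.csv")  # hypothetical file name

        # 1 & 6. Missing values / completeness: non-missing count per column
        print(df.count())

        # 2 & 4. Data range: min/max expose out-of-range entries (e.g., age = -10)
        print(df.agg(["min", "max"]))

        # 3. Distribution: a large gap between mean and median hints at skew
        print(df["age"].mean(), df["age"].median())

        # One-shot summary: count, mean, std, min, quartiles, and max
        print(df.describe())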


    Okiliong
    What are some techniques or methods used to explore data relationships during the data understanding stage?


    During the data understanding stage, several techniques and methods can be used to explore data relationships.
    Here are some commonly used techniques:

    1. Correlation Analysis: Correlation analysis measures the strength and direction of the linear relationship between two variables. It helps identify variables that are closely related and can provide insights into dependencies or associations between variables.

    2. Scatter Plots: Scatter plots are graphical representations that display the relationship between two variables. They can help visualize patterns, trends, or clusters in the data and identify potential relationships or dependencies.

    3. Cross-Tabulation: Cross-tabulation, also known as a contingency table, is used to analyze the relationship between two categorical variables. It provides a tabular summary of the joint distribution of the variables, allowing for the identification of associations or dependencies.

    4. Heatmaps: Heatmaps are visual representations of a matrix of values, where colors are used to represent the magnitude of the values. Heatmaps can be used to visualize the relationships between multiple variables simultaneously, making it easier to identify patterns or clusters.

    5. Dimensionality Reduction Techniques: Dimensionality reduction techniques, such as Principal Component Analysis (PCA) or t-SNE, can be used to explore relationships between variables in high-dimensional datasets. These techniques help visualize the data in lower-dimensional spaces while preserving the most important relationships between variables.

    6. Correlation Matrix: A correlation matrix is a tabular representation that displays the pairwise correlations between multiple variables. It provides a comprehensive overview of the relationships between variables and can help identify highly correlated variables or potential multicollinearity issues.

    7. Network Analysis: Network analysis techniques can be used to explore relationships between entities or variables represented as nodes and their connections represented as edges. This approach is particularly useful for analyzing complex relationships or dependencies in large datasets.

    These techniques help data scientists gain insights into the relationships between variables, identify dependencies, and understand the structure of the data. By exploring data relationships, data scientists can make informed decisions during the subsequent stages of the data science methodology, such as data preparation, feature engineering, and modeling.
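
    A hedged sketch of a few of these techniques using pandas, matplotlib, and seaborn follows; the file and column names ("age", "length_of_stay", "gender", "readmitted") are hypothetical examples, not from the original post.

        import pandas as pd
        import matplotlib.pyplot as plt
        import seaborn as sns

        df = pd.read_csv("admissions.csv")  # hypothetical file name

        # 1 & 6. Correlation analysis / correlation matrix over numeric columns
        corr = df.corr(numeric_only=True)
        print(corr)

        # 2. Scatter plot of two variables
        df.plot.scatter(x="age", y="length_of_stay")

        # 3. Cross-tabulation of two categorical variables
        print(pd.crosstab(df["gender"], df["readmitted"]))

        # 4. Heatmap of the pairwise correlations
        sns.heatmap(corr, annot=True, cmap="coolwarm")
        plt.show()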




  • Data Understanding.

    Okiliong
    Can you explain the purpose of the data understanding stage in the data science methodology?


    Certainly! The data understanding stage is a crucial step in the data science methodology.
    Its purpose is to gain a deep understanding of the data that has been collected and determine if it is representative of the problem to be solved. Here are the key objectives of the data understanding stage:

    1. Assess Data Quality: The data understanding stage involves evaluating the quality of the collected data. This includes checking for missing values, outliers, inconsistencies, and data errors. By identifying and addressing data quality issues, we can ensure the reliability and accuracy of the subsequent analysis.

    2. Explore Data Relationships: During this stage, we examine the relationships between different variables in the dataset. This includes calculating descriptive statistics, such as mean, median, minimum, maximum, and standard deviation, to understand the distribution of the data. Pairwise correlations are also analyzed to identify variables that are closely related or redundant.

    3. Understand Data Distributions: Histograms and other visualization techniques are used to understand the distribution of variables. This helps in identifying patterns, outliers, and potential data transformations that may be required for modeling.

    4. Determine Data Relevance: The data understanding stage helps us determine if the collected data is relevant to the problem at hand. It allows us to assess if the data includes all the necessary variables and if any additional data needs to be collected to address potential gaps.

    5. Refine Problem Definition: Through the data understanding stage, we may discover insights or patterns that challenge or refine our initial problem definition. This iterative process helps us gain a deeper understanding of the problem and adjust our approach accordingly.

    By thoroughly understanding the data, we can make informed decisions about data preprocessing, feature engineering, and modeling techniques. This stage sets the foundation for the subsequent stages of the data science methodology, such as data preparation, modeling, and evaluation.
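
    As a minimal illustration of objectives 2 and 3, distributions can be inspected in a few lines of pandas/matplotlib; the file and column names here are hypothetical.

        import pandas as pd
        import matplotlib.pyplot as plt

        df = pd.read_csv("admissions.csv")  # hypothetical file name

        # Histogram of one variable: shape, outliers, and skew at a glance
        df["length_of_stay"].plot.hist(bins=30)
        plt.xlabel("length_of_stay")
        plt.show()

        # Histograms of every numeric column at once
        df.hist(figsize=(10, 8))
        plt.tight_layout()
        plt.show()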

    Okiliong
    How could you use descriptive statistics to assess the quality of the data set?


    Descriptive statistics can be used to assess the quality of a dataset in several ways.
    Here are some examples:

    1. Identifying Missing Values: Descriptive statistics, such as the count of non-missing values, can help identify variables or columns with missing data. If a variable has a significantly lower count compared to the total number of observations, it indicates missing values that need to be addressed.

    2. Checking Data Range: Descriptive statistics like minimum and maximum values can provide insights into the range of values for each variable. Unusually large or small values that fall outside the expected range may indicate data entry errors or outliers that need to be investigated.

    3. Assessing Data Distribution: Descriptive statistics such as mean, median, and standard deviation can provide information about the distribution of the data. If the mean and median are significantly different, it suggests the presence of outliers or skewness in the data.

    4. Identifying Data Errors: Descriptive statistics can help identify potential data errors or inconsistencies. For example, if a variable represents age and has a minimum value of -10 or a maximum value of 200, it indicates data entry errors or invalid values that need to be corrected.

    5. Comparing Variables: Descriptive statistics can be used to compare variables and identify potential issues. For example, if two variables are expected to be highly correlated but have very different means or ranges, it may indicate a data quality issue or a problem with data collection.

    6. Assessing Data Completeness: Descriptive statistics can provide insights into the completeness of the data. For example, if a variable has a high count of missing values or a significantly lower count compared to other variables, it suggests data incompleteness that needs to be addressed.

    By using descriptive statistics, data scientists can gain a better understanding of the quality and characteristics of the dataset. This information helps in identifying data quality issues, outliers, and inconsistencies that need to be addressed before proceeding with further analysis and modeling.
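
    As a small illustration of check 4, out-of-range values can be flagged with a boolean filter; this is a toy sketch built around the age example above, not code from the original post.

        import pandas as pd

        # Toy data reproducing the "age of -10 or 200" example
        df = pd.DataFrame({"age": [25, 42, -10, 200, 67]})

        # Flag rows whose age falls outside a plausible human range
        invalid = df[(df["age"] < 0) | (df["age"] > 120)]
        print(invalid)  # the -10 and 200 rows
        print(len(invalid), "suspect rows to investigate or correct")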

    Thanks,




  • Data Understanding


    Data Science Methodology

    Essentially, the data understanding section of the data science methodology answers the question:
    Is the data that you collected representative of the problem to be solved?
    In order to understand the data related to congestive heart failure admissions, descriptive statistics needed to be run against the data columns that would become variables in the model.
    1. First, these included univariate statistics on each variable, such as the mean, median, minimum, maximum, and standard deviation.
    2. Second, pairwise correlations were used to see how closely certain variables were related, and which ones, if any, were very highly correlated, meaning that they would be essentially redundant, thus making only one relevant for modeling.
    3. Third, histograms of the variables were examined to understand their distributions.

    Histograms are a good way to understand how the values of a variable are distributed, and which sorts of data preparation may be needed to make the variable more useful in a model. For example, for a categorical variable that has too many distinct values to be informative in a model, the histogram would help decide how to consolidate those values.
    The univariate statistics and histograms are also used to assess data quality. Based on this information, certain values can be re-coded or perhaps even dropped if necessary, such as when a certain variable has many missing values. The question then becomes: does "missing" mean anything? Sometimes a missing value might mean "no" or "0" (zero); at other times it simply means "we don't know". A variable can also contain invalid or misleading values, such as a numeric variable called "age" that ranges from 0 to 100 but also contains 999, where that "triple-9" actually means "missing" but would be treated as a valid value unless we corrected it.
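
    A minimal pandas sketch of that re-coding step, using toy data rather than the course's actual dataset: the sentinel 999 is converted to a true missing value so it no longer distorts the summary statistics.

        import numpy as np
        import pandas as pd

        # Toy data: 999 is a sentinel that really means "missing"
        df = pd.DataFrame({"age": [34, 71, 999, 56, 999]})

        df["age"] = df["age"].replace(999, np.nan)
        print(df["age"].describe())  # the 999s no longer skew mean/min/max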
    Initially, the meaning of congestive heart failure admission was decided on the basis of a primary diagnosis of congestive heart failure.
    But working through the data understanding stage revealed that the initial definition was not capturing all of the congestive heart failure admissions that were expected, based on clinical experience.
    This meant looping back to the data collection stage and adding secondary and tertiary diagnoses, and building a more comprehensive definition of congestive heart failure admission.
    This is just one example of the iterative processes in the methodology.
    The more one works with the problem and the data, the more one learns and therefore the more refinement that can be done within the model, ultimately leading to a better solution to the problem.
    This ends the Data Understanding section of this course.




    Okiliong
    What are some potential challenges or issues that can arise during the data understanding stage?


    During the data understanding stage, there are several potential challenges or issues that can arise. Some of them include:

    1. Insufficient or incomplete data: The data collected may not be sufficient or may have missing values, which can affect the accuracy and reliability of the analysis.

    2. Data quality issues: The data may contain errors, outliers, or inconsistencies that need to be addressed before proceeding with the analysis.

    3. Data bias: The data may be biased towards certain groups or may not be representative of the population, leading to biased results and conclusions.

    4. Data privacy and security: Handling sensitive or confidential data requires ensuring proper security measures are in place to protect the data and comply with privacy regulations.

    5. Data compatibility: Data from different sources may have different formats, structures, or units, making it challenging to integrate and analyze them together.

    6. Data interpretation: Understanding the meaning and context of the data can be complex, especially when dealing with complex variables or domain-specific terminology.

    7. Data volume and scalability: Large datasets can pose challenges in terms of storage, processing power, and computational resources required for analysis.

    8. Stakeholder involvement: Involving stakeholders and subject matter experts in the data understanding stage can be challenging, especially if there are communication gaps or conflicting priorities.

    It's important to address these challenges and issues during the data understanding stage to ensure the accuracy and reliability of the subsequent analysis and modeling steps.


  • KEYBOARD KEYS
    1. Alphanumeric Keys
    Function: These are the primary keys on a keyboard, including letters, numbers, and punctuation marks. They are used for general typing and data entry.
    2. Function Keys (F1 to F12)
    Function: These keys perform special functions depending on the operating system or software being used. For example, F1 is often used to open the help menu, while F5 might refresh a webpage.
    3. Escape Key (Esc)
    Function: The Escape key is typically used to cancel operations, close menus, or exit full-screen mode in various applications.
    4. Tab Key
    Function: This key is used to move the cursor to the next tab stop, often used for indentation in documents or navigating through form fields.
    5. Caps Lock Key
    Function: When activated, this key makes all letters typed in uppercase. Pressing it again returns to lowercase.
    6. Shift Key
    Function: The Shift key is used in combination with other keys to type capital letters or access the secondary functions of keys (e.g., typing symbols like @ or #).
    7. Control Key (Ctrl)
    Function: Often used in combination with other keys to perform shortcuts, such as Ctrl+C for copying and Ctrl+V for pasting.
    8. Alt Key
    Function: Similar to the Control key, the Alt key is used in keyboard shortcuts and to access alternative functions of other keys.
    9. Spacebar
    Function: The longest key on the keyboard, it is used to insert a space between words during typing.
  • MICROSOFT SURVEY AND DATA ENTRY
  • How to protect Data in Microsoft Excel using the inbuilt tools https://youtu.be/MywITYPUL44?list=PLGyQ_RMbdfNMGc8Sqju3DSsF4LH2yyTEv
  • Search for data within a workbook
  • Database lesson 7
  • Database lesson 6
  • Database lesson 4
  • Database lesson 3
  • Database lesson 2
  • Database lesson 1
  • INTRODUCTION TO CORRELATION INTERPRETATION OF DATA
  • Watch my latest video on how to use the IF function in Excel! Learn to simplify your data analysis with this powerful tool. Check it out now! #ExcelTips #DataAnalysis #IFFunction #LearnExcel #SpreadsheetSkills #ExcelTutorial #TechSkills #ProductivityBoost