Tuesday, June 4, 2019

Data Pre-processing Tool

Chapter 2

Real life data rarely comply with the requirements of various data mining tools: they are usually inconsistent and noisy. They may contain redundant attributes, unsuitable formats etc. Hence data has to be prepared vigilantly before the data mining actually starts. It is a well known fact that the success of a data mining algorithm is very much dependent on the quality of data processing. Data processing is one of the most important tasks in data mining, and in this context it is natural that data pre-processing is a complicated task involving large data sets. Sometimes data pre-processing takes more than 50% of the total time spent in solving the data mining problem. It is therefore crucial for data miners to choose efficient data preprocessing techniques for a specific data set, which can not only save processing time but also retain the quality of the data for the data mining process.

A data pre-processing tool should help miners with many data mining activities.
For example, data may be provided in different formats as discussed in the previous chapter (flat files, database files etc). Data files may also have different formats of values, calculation of derived attributes, data filters, joined data sets etc. The data mining process generally starts with understanding of data, and in this stage pre-processing tools may help with data exploration and data discovery tasks. Data processing includes lots of tedious work. Data pre-processing generally consists of:

- Data Cleaning
- Data Integration
- Data Transformation and
- Data Reduction.

In this chapter we will study all these data pre-processing activities.

2.1 Data Understanding

In the data understanding phase the first task is to collect initial data and then proceed with activities in order to become familiar with the data, to discover data quality problems, to gain first insights into the data, or to identify interesting subsets to form hypotheses about hidden information. The data understanding phase according to the CRISP model is shown in the following figure.

2.1.1 Collect Initial Data

The initial collection of data includes loading of data if required for data understanding. For instance, if a specific tool is applied for data understanding, it makes great sense to load your data into this tool. This attempt possibly leads to initial data preparation steps. However, if data is obtained from multiple data sources then integration is an additional issue.

2.1.2 Describe Data

Here the gross or surface properties of the gathered data are examined.

2.1.3 Explore Data

This task is required to handle the data mining questions, which may be addressed using querying, visualization and reporting. These include:

- Distribution of key attributes, for instance the goal attribute of a prediction task
- Relations between pairs or small numbers of attributes
- Results of simple aggregations
- Properties of important sub-populations
- Simple statistical analyses.

2.1.4 Verify Data Quality

In this step the quality of data is examined. It answers questions such as:

- Is the data complete (does it cover all the cases required)?
- Is it accurate, or does it contain errors — and if there are errors, how common are they?
- Are there missing values in the data? If so, how are they represented, where do they occur and how common are they?

2.2 Data Preprocessing

The data preprocessing phase focuses on the pre-processing steps that produce the data to be mined. Data preparation or preprocessing is one of the most important steps in data mining. Industrial practice indicates that once data is well prepared the mined results are much more accurate, which means this step is also very critical for the success of a data mining method. Among others, data preparation mainly involves data cleaning, data integration, data transformation, and reduction.

2.2.1 Data Cleaning

Data cleaning is also known as data cleansing or scrubbing. It deals with detecting and removing inconsistencies and errors from data in order to get better quality data. While using a single data source such as flat files or databases, data quality problems arise due to misspellings during data entry, missing information or other invalid data. When the data is taken from the integration of multiple data sources such as data warehouses, federated database systems or global web-based information systems, the requirement for data cleaning increases significantly. This is because the multiple sources may contain redundant data in different formats.
Consolidation of different data formats and elimination of redundant information become necessary in order to provide access to accurate and consistent data. Good quality data requires passing a set of quality criteria. Those criteria include:

- Accuracy: an aggregated value over the criteria of integrity, consistency and density.
- Integrity: an aggregated value over the criteria of completeness and validity.
- Completeness: achieved by correcting data containing anomalies.
- Validity: approximated by the amount of data satisfying integrity constraints.
- Consistency: concerns contradictions and syntactical anomalies in data.
- Uniformity: directly related to irregularities in data.
- Density: the quotient of missing values in the data over the number of total values that ought to be known.
- Uniqueness: related to the number of duplicates present in the data.
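Criteria such as density and uniqueness can be measured directly. Below is a minimal sketch with pandas, assuming an invented DataFrame; the column names and values are hypothetical.

    import pandas as pd

    df = pd.DataFrame({
        "name":   ["John Smith", "J. Smith", None, "Jane Doe"],
        "income": [52000, None, 61000, 52000],
    })

    # Density, as defined above: quotient of missing values over the
    # total number of values that ought to be known.
    missing_ratio = df.isna().sum().sum() / df.size

    # Uniqueness: count rows that are exact duplicates of an earlier row.
    duplicates = df.duplicated().sum()

    print(f"missing ratio = {missing_ratio:.2f}, duplicate rows = {duplicates}")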
2.2.1.1 Terms Related to Data Cleaning

- Data cleaning: the process of detecting, diagnosing, and editing damaged data.
- Data editing: changing the value of data which are incorrect.
- Data flow: the passing of recorded information through succeeding information carriers.
- Inliers: data values falling inside the projected range.
- Outliers: data values falling outside the projected range.
- Robust estimation: evaluation of statistical parameters using methods that are less sensitive to the effect of outliers than more conventional methods.

2.2.1.2 Definition of Data Cleaning

Data cleaning is a process used to identify imprecise, incomplete, or irrational data and then to improve the quality through correction of detected errors and omissions. This process may include:

- Format checks
- Completeness checks
- Reasonableness checks
- Limit checks
- Review of the data to identify outliers or other errors
- Assessment of data by subject area experts (e.g. taxonomic specialists).

By this process suspected records are flagged, documented and checked subsequently, and finally these suspected records can be corrected. Sometimes validation checks also involve checking for compliance against applicable standards, rules, and conventions.

The general framework for data cleaning is given as:

1. Define and determine error types
2. Search and identify error instances
3. Correct the errors
4. Document error instances and error types, and
5. Modify data entry procedures to reduce future errors.

The data cleaning process is referred to by different people using a number of terms; it is a matter of preference which one uses. These terms include Error Checking, Error Detection, Data Validation, Data Cleaning, Data Cleansing, Data Scrubbing and Error Correction. We use Data Cleaning to encompass three sub-processes, viz.

1. Data checking and error detection,
2. Data validation, and
3. Error correction.

A fourth — improvement of the error prevention processes — could perhaps be added.

2.2.1.3 Problems with Data

Here we note some key problems with data:

- Missing data: this problem occurs for two main reasons — data are absent from the source where they are expected to be present, or data are present but not available in the appropriate form. Detecting missing data is usually straightforward.
- Erroneous data: this problem occurs when a wrong value is recorded for a real world value, for instance the incorrect spelling of a name. Detection of erroneous data can be quite difficult.
- Duplicated data: this problem occurs for two reasons — repeated entry of the same real world entity with somewhat different values, or a real world entity that has different identifications. Repeated records are regular and frequently easy to detect; different identifications of the same real world entity can be a very hard problem to identify and solve.
- Heterogeneities: when data from different sources are brought together in one analysis, heterogeneity may occur. Structural heterogeneity arises when the data structures reflect different business usage; semantic heterogeneity arises when the meaning of data is different in each system that is being combined. Heterogeneities are usually very difficult to resolve because they usually involve a lot of contextual data that is not well defined as metadata.
- Information dependencies: relationships among the different sets of attributes are commonly present, and wrong cleaning mechanisms can further damage the information in the data.

Various analysis tools handle these problems in different ways. Commercial offerings are available that assist the cleaning process, but these are often problem specific. Uncertainty in information systems is a well-recognized hard problem. The following figure shows a very simple example of missing and erroneous data.

Extensive support for data cleaning must be provided by data warehouses. Data warehouses have a high probability of dirty data since they load and continuously refresh huge amounts of data from a variety of sources. Since these data warehouses are used for strategic decision making, the correctness of their data is important to avoid wrong decisions. The ETL (Extraction, Transformation, and Loading) process for building a data warehouse is illustrated in the following figure. Data transformations are related with schema or data translation and integration, and with filtering and aggregating data to be stored in the data warehouse. All data cleaning is classically performed in a separate data staging area prior to loading the transformed data into the warehouse. A large number of tools of varying functionality are available to support these tasks, but often a significant portion of the cleaning and transformation work has to be done manually or by low-level programs that are difficult to write and maintain.

A data cleaning method should assure the following:

- It should identify and eliminate all major errors and inconsistencies, both in an individual data source and when integrating multiple sources.
- It should be supported by tools to bound manual examination and programming effort, and it should be extensible so that it can cover additional sources.
- It should be performed in association with schema related data transformations based on metadata.
- Data cleaning mapping functions should be specified in a declarative way and be reusable for other data sources.

2.2.1.4 Data Cleaning Phases

1. Analysis: To identify errors and inconsistencies in the database there is a need for detailed analysis, which involves both manual inspection and automated analysis programs. This reveals where (most of) the problems are present.

2. Defining Transformation and Mapping Rules: After discovering the problems, this phase is concerned with defining the manner by which we are going to automate the solutions that clean the data, translating the problems found in the analysis phase into a list of activities. Example: remove all entries for "J. Smith" because they are duplicates of "John Smith"; find entries with "bule" in the colour field and change these to "blue"; find all records where the phone number field does not match the pattern (NNNNN NNNNNN); further steps for cleaning this data are then applied, etc.

3. Verification: In this phase we check and assess the transformation plans made in phase 2. Without this step, we may end up making the data dirtier rather than cleaner. Since data transformation is the main step that actually changes the data itself, there is a need to be sure that the applied transformations will do so correctly. Therefore test and examine the transformation plans very carefully. Example: suppose we have a very thick C++ book where it says "strict" in all the places where it should say "struct".

4. Transformation: Now, once it is certain that cleaning will be done correctly, apply the transformations verified in the last step. For a large database, this task is supported by a variety of tools.

Backflow of Cleaned Data: In data mining the main objective is to convert and move clean data into the target system. This creates a requirement to purify legacy data. Cleansing can be a complicated process depending on the technique chosen and has to be designed carefully to achieve the objective of removal of dirty data. Some methods to accomplish the task of data cleansing of a legacy system include:

- Automated data cleansing
- Manual data cleansing
- A combined cleansing process

The phase-2 mapping rules above lend themselves to simple automation, as sketched below.
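A minimal sketch with pandas; the DataFrame and its records are invented, and the three rules are the ones from the phase-2 example above.

    import pandas as pd

    records = pd.DataFrame({
        "name":   ["John Smith", "J. Smith", "Jane Doe"],
        "colour": ["bule", "blue", "green"],
        "phone":  ["12345 678901", "1234 56789", "98765 432109"],
    })

    # Rule 1: remove all entries for "J. Smith" (duplicates of "John Smith").
    records = records[records["name"] != "J. Smith"]

    # Rule 2: change the misspelling "bule" to "blue" in the colour field.
    records["colour"] = records["colour"].replace("bule", "blue")

    # Rule 3: flag records whose phone number does not match "NNNNN NNNNNN".
    records["phone_ok"] = records["phone"].str.match(r"^\d{5} \d{6}$")
    print(records)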
2.2.1.5 Missing Values

Data cleaning addresses a variety of data quality problems, including noise and outliers, inconsistent data, duplicate data, and missing values. Missing values are one important problem to be addressed. The missing value problem occurs because many tuples may have no recorded value for several attributes. For example, consider a customer sales database consisting of a whole bunch of records (say around 100,000) where some of the records have certain fields missing; say, customer income in the sales data may be missing. The goal here is to find a way to predict what the missing data values should be (so that they can be filled in) based on the existing data. Missing data may be due to the following reasons:

- Equipment malfunction
- Inconsistency with other recorded data, leading to deletion
- Data not entered due to misunderstanding
- Certain data not considered important at the time of entry
- No history or changes of the data were registered

How to Handle Missing Values?

Dealing with missing values is a regular question that has to do with the actual meaning of the data. There are various methods for handling missing entries:

1. Ignore the data row. One solution is to just ignore the entire data row. This is generally done when the class label is missing (assuming the data mining goal is classification), or when many attributes are missing from the row (not just one). But if the percentage of such rows is high we will definitely get poor performance.

2. Use a global constant to fill in for missing values. We can fill in a global constant for missing values such as "unknown", "N/A" or minus infinity. This is done because at times it just doesn't make sense to try to predict the missing value. For example, if the office address is missing for some customers in the customer sales database, filling it in doesn't make much sense. This method is simple but not foolproof.

3. Use the attribute mean. Say the average income of a family is X; you can use that value to replace missing income values in the customer sales database.

4. Use the attribute mean for all samples belonging to the same class. Say you have a car pricing DB that, among other things, classifies cars into "Luxury" and "Low budget", and you are dealing with missing values in the cost field. Replacing the missing cost of a luxury car with the average cost of all luxury cars is probably more accurate than the value you would get if you factored in the low budget cars.

5. Use a data mining algorithm to predict the value. The value can be determined using regression, inference based tools using the Bayesian formalism, decision trees, clustering algorithms etc.

Methods 1-4 are easy to express directly with a data manipulation library, as sketched below.
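A sketch of methods 1-4 using pandas, assuming a hypothetical table with invented "class" and "income" columns standing in for the examples above; method 5 would plug a predictive model in instead.

    import pandas as pd

    sales = pd.DataFrame({
        "class":  ["luxury", "luxury", "budget", "budget"],
        "income": [90000, None, 30000, None],
    })

    # 1. Ignore the data row: drop rows whose income is missing.
    dropped = sales.dropna(subset=["income"])

    # 2. Use a global constant (here -1 as a sentinel for "unknown").
    constant = sales["income"].fillna(-1)

    # 3. Use the attribute mean over the whole column.
    overall = sales["income"].fillna(sales["income"].mean())

    # 4. Use the attribute mean for all samples of the same class.
    by_class = sales.groupby("class")["income"].transform(
        lambda s: s.fillna(s.mean()))
    print(by_class)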
2.2.1.6 Noisy Data

Noise can be defined as a random error or variance in a measured variable. Due to this randomness it is very difficult to follow a fixed strategy for noise removal from the data. Real world data is not always faultless; it can suffer from corruption which may impact the interpretations of the data, the models created from the data, and the decisions made based on the data. Incorrect attribute values could be present because of the following reasons:

- Faulty data collection instruments
- Data entry problems
- Duplicate records
- Incomplete data
- Inconsistent data
- Incorrect processing
- Data transmission problems
- Technology limitations
- Inconsistency in naming conventions
- Outliers

How to Handle Noisy Data?

The methods for removing noise from data are as follows:

1. Binning: this approach first sorts the data and partitions it into (equal-frequency) bins; then one can smooth it using bin means, bin medians, bin boundaries, etc.
2. Regression: in this method smoothing is done by fitting the data to regression functions.
3. Clustering: clustering detects and removes outliers from the data.
4. Combined computer and human inspection: in this approach the computer detects suspicious values which are then checked by human experts (e.g., this approach can deal with possible outliers).

These methods are explained in detail as follows.

Binning: a data preparation activity that converts continuous data to discrete data by replacing a value from a continuous range with a bin identifier, where each bin represents a range of values. For instance, age can be changed to bins such as "20 or under", "21-40", "41-65" and "over 65". Binning methods smooth a sorted data set by consulting the values around each value; this is therefore called local smoothing. Consider the binning methods and the example below.

Binning methods:

- Equal-width (distance) partitioning: divides the range into N intervals of equal size (uniform grid); if A and B are the lowest and highest values of the attribute, the width of the intervals will be W = (B - A) / N. This is the most straightforward method, but outliers may dominate the presentation and skewed data is not handled well.
- Equal-depth (frequency) partitioning: divides the range (the values of a given attribute) into N intervals, each containing approximately the same number of samples (elements). It gives good data scaling, though managing categorical attributes can be tricky.
- Smoothing by bin means: each bin value is replaced by the mean of the values in the bin.
- Smoothing by bin medians: each bin value is replaced by the median of the values in the bin.
- Smoothing by bin boundaries: each bin value is replaced by the closest boundary value of its bin.

Example: let the sorted data for price (in dollars) be 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34.

- Partition into equal-frequency (equi-depth) bins:
  - Bin 1: 4, 8, 9, 15
  - Bin 2: 21, 21, 24, 25
  - Bin 3: 26, 28, 29, 34
- Smoothing by bin means:
  - Bin 1: 9, 9, 9, 9 (for example, the mean of 4, 8, 9, 15 is 9)
  - Bin 2: 23, 23, 23, 23
  - Bin 3: 29, 29, 29, 29
- Smoothing by bin boundaries:
  - Bin 1: 4, 4, 4, 15
  - Bin 2: 21, 21, 25, 25
  - Bin 3: 26, 26, 26, 34
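The worked example can be reproduced in a few lines of numpy. This is a sketch only: the price list is the one from the example above, and note that the example rounds the bin means 22.75 and 29.25 to 23 and 29.

    import numpy as np

    prices = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
    bins = prices.reshape(3, 4)      # equal-depth: 3 bins of 4 sorted values

    # Equal-width alternative for comparison: W = (B - A) / N.
    width = (prices.max() - prices.min()) / 3

    # Smoothing by bin means: every value becomes its bin's mean.
    means = bins.mean(axis=1)                     # 9.0, 22.75, 29.25
    by_means = np.repeat(means, 4).reshape(3, 4)

    # Smoothing by bin boundaries: every value snaps to the nearer of
    # its bin's minimum and maximum (ties here go to the minimum).
    lo = bins.min(axis=1, keepdims=True)
    hi = bins.max(axis=1, keepdims=True)
    by_bounds = np.where(bins - lo <= hi - bins, lo, hi)
    print(by_bounds)    # [[ 4  4  4 15] [21 21 25 25] [26 26 26 34]]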
Regression: regression is a data mining technique used to fit an equation to a dataset. The simplest form of regression is linear regression, which uses the formula of a straight line (y = b + wx) and determines the suitable values for b and w to predict the value of y based upon a given value of x. Sophisticated techniques, such as multiple regression, permit the use of more than one input variable and allow for the fitting of more complex models, such as a quadratic equation. Regression is further described in a subsequent chapter while discussing prediction.
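As a sketch, linear-regression smoothing with numpy's least-squares fit; the data points are invented.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])   # noisy measurements

    w, b = np.polyfit(x, y, deg=1)              # fit y = b + w*x
    smoothed = b + w * x                        # replace noisy y with the fit
    print(f"y = {b:.2f} + {w:.2f}x")            # approximately y = 0.05 + 1.99x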
Clustering: clustering is a method of grouping data into different groups, so that the data in each group share similar trends and patterns. Clustering constitutes a major class of data mining algorithms. These algorithms automatically partition the data space into a set of regions or clusters. The goal of the process is to find all sets of similar examples in the data, in some optimal fashion. The following figure shows three clusters; values that fall outside the clusters are outliers.

Combined computer and human inspection: these methods find suspicious values using computer programs, and the values are then verified by human experts. By this process all outliers are checked.

2.2.1.7 Data Cleaning as a Process

Data cleaning is the process of detecting, diagnosing, and editing data. It is a three stage method involving repeated cycles of screening, diagnosing, and editing of suspected data abnormalities. Many data errors are detected incidentally during study activities; however, it is more efficient to discover inconsistencies by actively searching for them in a planned manner. It is not always immediately clear whether a data point is erroneous; many times it requires careful examination. Likewise, missing values require additional checks. Therefore, predefined rules for dealing with errors and true missing and extreme values are part of good practice. One can monitor for suspect features in survey questionnaires, databases, or analysis data. In small studies, with the examiner closely involved at all stages, there may be little or no difference between a database and an analysis dataset.

During as well as after treatment, the diagnostic and treatment phases of cleaning need insight into the sources and types of errors at all stages of the study. The data flow concept is therefore crucial in this respect. After measurement, the research data go through repeated steps of being entered into information carriers, extracted, transferred to other carriers, edited, selected, transformed, summarized, and presented. It is essential to understand that errors can occur at any stage of the data flow, including during data cleaning itself. Most of these problems are due to human error.

Inaccuracy of a single data point and measurement may be tolerable, and associated with the inherent technical error of the measurement device. Therefore the process of data cleaning must focus on those errors that are beyond small technical variations and that form a major shift within or beyond the population distribution. In turn, it must be based on an understanding of technical errors and expected ranges of normal values.

Some errors are worthy of higher priority, but which ones are most significant is highly study-specific. For instance, in most medical epidemiological studies, errors that need to be cleaned at all costs include missing gender, gender misspecification, birth date or examination date errors, duplications or merging of records, and biologically impossible results. Another example: in nutrition studies, date errors lead to age errors, which in turn lead to errors in weight-for-age scoring and, further, to misclassification of subjects as under- or overweight. Errors of sex and date are particularly important because they contaminate derived variables. Prioritization is essential if the study is under time pressure or if resources for data cleaning are limited.

2.2.2 Data Integration

This is the process of taking data from one or more sources and mapping it, field by field, onto a new data structure. The idea is to combine data from multiple sources into a coherent form. Various data mining projects require data from multiple sources because:

- Data may be distributed over different databases or data warehouses (for example, an epidemiological study that needs information about both hospital admissions and car accidents).
- Sometimes data may be required from different geographic distributions, or there may be a need for historical data (e.g. integrating historical data into a new data warehouse).
- There may be a necessity to enhance the data with additional (external) data, for improving data mining precision.

2.2.2.1 Data Integration Issues

There are a number of issues in data integration. Imagine two database tables (Database Table 1 and Database Table 2) describing the same customers. In integrating these two tables a variety of issues is involved, such as:

1. The same attribute may have different names (for example, "Name" and "Given Name" may be the same attribute under different names).
2. An attribute may be derived from another (for example, attribute "Age" may be derived from attribute "DOB").
3. Attributes might be redundant (for example, attribute "PID" may be redundant).
4. Values in attributes might be different (for example, for the same PID the values of some fields may differ between the two tables).
5. There may be duplicate records under different keys (the same record may be replicated with different key values).

Therefore schema integration and object matching can be tricky. The question here is: how are equivalent entities from different sources matched? This problem is known as the entity identification problem. Conflicts have to be detected and resolved. Integration becomes easier if unique entity keys are available in all the data sets (or tables) to be linked. Metadata can help in schema integration (examples of metadata for each attribute include the name, meaning, data type and range of values permitted for the attribute).

2.2.2.2 Redundancy

Redundancy is another important issue in data integration. Two given attributes (such as DOB and Age in the tables above) may be redundant if one is derived from the other attribute or set of attributes. Inconsistencies in attribute or dimension naming can also lead to redundancies in the given data sets.

Handling redundant data. We can handle data redundancy problems in the following ways (a de-duplication sketch follows this list):

- Use correlation analysis.
- Different coding / representation has to be considered (e.g. metric / imperial measures).
- Careful (manual) integration of the data can reduce or prevent redundancies (and inconsistencies).
- De-duplication (also called internal data linkage): if no unique entity keys are available, analyse the values in attributes to find duplicates.
- Process redundant and inconsistent data (easy if the values are the same): delete one of the values, average the values (only for numerical attributes), or take the majority value (if there are more than two duplicates and some values are the same).
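A minimal de-duplication sketch along the lines above, assuming a pandas DataFrame with no unique entity key; the records are invented.

    import pandas as pd

    df = pd.DataFrame({
        "name":   ["John Smith", "John Smith", "John Smith",
                   "John Smith", "Jane Doe"],
        "city":   ["Delhi", "Delhi", "Mumbai", "Delhi", "Pune"],
        "weight": [70.0, 70.0, 71.0, 72.0, 60.0],
    })

    # Drop rows that agree on every field (exact duplicates).
    df = df.drop_duplicates()

    # Resolve remaining conflicts per entity: average numeric attributes,
    # take the majority value for categorical ones.
    resolved = df.groupby("name").agg(
        weight=("weight", "mean"),
        city=("city", lambda s: s.mode().iloc[0]),
    )
    print(resolved)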
Correlation analysis (also called Pearson's product moment coefficient): some redundancies can be detected by using correlation analysis. Given two attributes, such analysis can measure how strongly one attribute implies the other. For numerical attributes we can compute the correlation coefficient of two attributes A and B to evaluate the correlation between them. This is given by

r_{A,B} = \frac{\sum (a_i b_i) - n \bar{A} \bar{B}}{n \, \sigma_A \, \sigma_B}

where:

- n is the number of tuples,
- \bar{A} and \bar{B} are the respective means of A and B,
- \sigma_A and \sigma_B are the respective standard deviations of A and B, and
- \sum (a_i b_i) is the sum of the AB cross-product.

a. If r_{A,B} is greater than zero, then A and B are positively correlated, meaning that the values of A increase as the values of B increase. Note that -1 <= r_{A,B} <= +1; the higher the value, the stronger the correlation.
b. If r_{A,B} is equal to zero, then A and B are independent of each other and there is no correlation between them.
c. If r_{A,B} is less than zero, then A and B are negatively correlated: when the value of one attribute increases, the value of the other decreases. This means that each attribute discourages the other.

It is important to note that correlation does not imply causality. That is, if A and B are correlated, this does not necessarily mean that A causes B or that B causes A. For example, in analyzing a demographic database, we may find that attributes representing the number of accidents and the number of car thefts in a region are correlated. This does not mean that one causes the other; both may be related to a third attribute, namely population.

For discrete data, a correlation relationship between two attributes can be discovered by a \chi^2 (chi-square) test. Let A have c distinct values a1, a2, ..., ac and B have r distinct values b1, b2, ..., br. The data tuples described by A and B can be shown as a contingency table, with the c values of A making up the columns and the r values of B making up the rows. Then

\chi^2 = \sum_{i=1}^{r} \sum_{j=1}^{c} \frac{(O_{i,j} - E_{i,j})^2}{E_{i,j}}

where:

- O_{i,j} is the observed frequency (i.e. the actual count) of the joint event (a_j, b_i), and
- E_{i,j} is the expected frequency, which can be computed as

E_{i,j} = \frac{\left( \sum_{k=1}^{c} O_{i,k} \right) \left( \sum_{k=1}^{r} O_{k,j} \right)}{N}

that is, the row total times the column total divided by N, where:

- N is the number of data tuples,
- \sum_{k} O_{i,k} is the number of tuples having value b_i for B (the row total), and
- \sum_{k} O_{k,j} is the number of tuples having value a_j for A (the column total).

The larger the \chi^2 value, the more likely the variables are related. The cells that contribute the most to the \chi^2 value are those whose actual count is very different from the expected count.

Chi-Square Calculation: An Example

Suppose a group of 1,500 people were surveyed. The gender of each person was noted, and each person was polled on whether their preferred type of reading material was fiction or non-fiction. The observed frequency of each possible joint event is summarized in the following contingency table (the numbers in parentheses are the expected frequencies). Calculate chi-square.

                               Male         Female       Sum (row)
    Likes science fiction      250 (90)      200 (360)      450
    Dislikes science fiction    50 (210)    1000 (840)     1050
    Sum (col.)                 300          1200           1500

Here, for example, E_{1,1} = count(male) * count(fiction) / N = 300 * 450 / 1500 = 90, and so on. The statistic works out to

\chi^2 = \frac{(250-90)^2}{90} + \frac{(50-210)^2}{210} + \frac{(200-360)^2}{360} + \frac{(1000-840)^2}{840} = 284.44 + 121.90 + 71.11 + 30.48 = 507.93.

For this table the degrees of freedom are (2-1)(2-1) = 1, as the table is 2x2. For 1 degree of freedom, the \chi^2 value needed to reject the hypothesis at the 0.001 significance level is 10.828 (taken from the table of upper percentage points of the \chi^2 distribution, typically available in any statistics textbook). Since the computed value, 507.93, is above this, we can reject the hypothesis that gender and preferred reading are independent and conclude that the two attributes are strongly correlated for the given group.
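Both measures are easy to check directly. A sketch with numpy and scipy: the contingency table is the one from the example above (correction=False disables Yates's continuity correction so the statistic matches the hand calculation), while the numeric attributes are invented toy data.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Pearson correlation between two invented numeric attributes.
    A = np.array([2.0, 4.0, 6.0, 8.0])
    B = np.array([1.1, 2.0, 2.9, 4.2])
    r = np.corrcoef(A, B)[0, 1]
    print(f"r_A,B = {r:.3f}")          # close to +1: positively correlated

    # Chi-square on the observed counts from the example above.
    observed = np.array([[250, 200],   # fiction:     male, female
                         [50, 1000]])  # non-fiction: male, female
    chi2, p, dof, expected = chi2_contingency(observed, correction=False)
    print(round(chi2, 2), dof)         # 507.93 with 1 degree of freedom
    print(expected)                    # [[ 90. 360.] [210. 840.]]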
Duplication must also be detected at the tuple level. The use of denormalized tables is another source of redundancy. Redundancies may further lead to data inconsistencies (due to updating some copies of a value but not others).

2.2.2.3 Detection and Resolution of Data Value Conflicts

Another significant issue in data integration is the detection and resolution of data value conflicts: for the same entity, attribute values from different sources may differ. For example, weight may be stored in metric units in one source and British imperial units in another. Similarly, for a hotel chain, the price of rooms in different cities may be quoted in different currencies.
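A tiny sketch of resolving such a representation conflict, normalising weights reported in pounds by one source and kilograms by another; the records and field names are invented.

    LB_TO_KG = 0.45359237

    def to_kg(value, unit):
        """Normalise a weight to kilograms before merging sources."""
        return value * LB_TO_KG if unit == "lb" else value

    merged = [("source_1", 154.0, "lb"), ("source_2", 70.0, "kg")]
    for source, weight, unit in merged:
        print(source, round(to_kg(weight, unit), 1), "kg")   # both about 70 kg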
