Data mining
From Wikipedia, the free encyclopedia
Data mining (DM), also called Knowledge-Discovery in Databases (KDD) or Knowledge-Discovery and Data Mining, is the process of automatically searching large volumes of data for patterns using tools such as classification, association rule mining and clustering. Data mining is a complex topic with links to multiple core fields such as computer science, and it draws on computational techniques from statistics, information retrieval, machine learning and pattern recognition.
Example
A simple example of data mining, often called Market Basket Analysis, is its use for retail sales. If a clothing store records the purchases of customers, a data mining system could identify those customers who favour silk shirts over cotton ones.
Another is that of a supermarket chain that, through analysis of transactions over a long period of time, found that beer and diapers were often bought together. Although explaining this relationship might be difficult, taking advantage of it is easier, for example by placing the high-profit diapers close to the high-profit beer in the store. (This example is questioned at Beer and Nappies -- A Data Mining Urban Legend.)
The two examples above deal with association rules within transaction-based data. Not all data is transaction based, and logical or inexact rules may also be present within a database. In a manufacturing application, an inexact rule may state that 73% of products that have a specific defect or problem will develop a secondary problem within the next six months.
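Association rules like these are usually ranked by two measures: support (how often the items occur together) and confidence (how often the consequent appears given the antecedent). A minimal Python sketch, using a made-up transaction list; the items and numbers are illustrative only, not from any real data set:

```python
transactions = [
    {"beer", "diapers", "chips"},
    {"beer", "diapers"},
    {"beer", "bread"},
    {"diapers", "milk"},
    {"beer", "diapers", "milk"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Estimated P(consequent | antecedent): support of the combined
    itemset divided by support of the antecedent alone."""
    return (support(antecedent | consequent, transactions)
            / support(antecedent, transactions))

print(support({"beer", "diapers"}, transactions))                 # 0.6
print(round(confidence({"beer"}, {"diapers"}, transactions), 2))  # 0.75
```

A rule such as beer => diapers would be reported when both measures exceed user-chosen thresholds; real association rule miners (e.g. Apriori) search the itemset lattice rather than evaluating rules one at a time.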
Use of the term
Data mining has been defined as "the nontrivial extraction of implicit, previously unknown, and potentially useful information from data" [1] and "the science of extracting useful information from large data sets or databases" [2].
It involves sorting through large amounts of data and picking out relevant information.
It is usually used by businesses and other organizations, but is increasingly used in the sciences to extract information from the enormous data sets generated by modern experimentation.
Metadata, or data about a given data set, are often expressed in a condensed, mineable format, or one that otherwise facilitates the practice of data mining. Common examples include executive summaries and scientific abstracts.
Although data mining is a relatively new term, the technology is not. Companies have long used powerful computers to sift through volumes of data, such as supermarket scanner data, and produce market research reports. Continuous innovations in computer processing power, disk storage, and statistical software are dramatically increasing the accuracy and usefulness of analysis.
Data mining identifies trends within data that go beyond simple analysis. Through the use of sophisticated algorithms, users have the ability to identify key attributes of business processes and target opportunities.
The term data mining is often used to apply to the two separate processes of knowledge discovery and prediction. Knowledge discovery provides explicit information that has a readable form and can be understood by a user. Forecasting, or predictive modeling, provides predictions of future events and may be transparent and readable in some approaches (e.g. rule-based systems) and opaque in others, such as neural networks. Moreover, some data mining systems, such as neural networks, are inherently geared towards prediction rather than knowledge discovery.
Misuse of the term
The term "data mining" is often used incorrectly to apply to a variety of other processes besides data mining. For example, a popular report mining tool, Monarch by Datawatch, is advertised as a "data mining" tool for extracting information from text-based reports into spreadsheet format. The application does not actually perform any analysis of the data, and so is more accurately described as an ETL (extract, transform, load) or data extraction tool.
In many cases, applications may claim to perform "data mining" by automating the creation of charts or graphs with historic trends and analysis. Although this information may be useful and timesaving, it does not fit the traditional definition of data mining, as the application performs no analysis itself and has no understanding of the underlying data. Instead, it relies on templates or predefined macros (created either by programmers or users) to identify trends, patterns and differences.
A key defining factor for true data mining is that the application itself is performing some real analysis. In almost all cases, this analysis is guided by some degree of user interaction, but it must provide the user some insight that is not readily apparent through simple slicing and dicing. Applications that are not to some degree self-guiding are performing data analysis, not data mining.
Related terms
Although the term "data mining" is usually applied to the analysis of data, like "artificial intelligence" it is an umbrella term with varied meanings in a wide range of contexts. Unlike data analysis, data mining is not based on or focused on an existing model which is to be tested or whose parameters are to be optimized.
In statistical analyses where there is no underlying theoretical model, data mining is often approximated via stepwise regression methods, wherein the space of 2^k possible relationships between a single outcome variable and k potential explanatory variables is smartly searched. With the advent of parallel computing, it became possible (when k is less than approximately 40) to examine all 2^k models. This procedure is called all-subsets or exhaustive regression. Some of the first applications of exhaustive regression involved the study of plant data.[3]
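All-subsets regression can be sketched in a few lines. The following Python example is illustrative only (synthetic trigonometric data, not the plant data of the cited study); it enumerates all 2^k subsets of the k candidate variables and scores each fitted model with BIC, so that the full model does not automatically win on residual error alone:

```python
from itertools import combinations
import numpy as np

# Synthetic data: y depends only on columns 1 and 3 of X, plus a small
# deterministic perturbation standing in for noise.
t = np.arange(30, dtype=float)
X = np.column_stack([np.sin(t), np.cos(t), np.sin(2 * t), np.cos(2 * t)])
y = 2.0 * X[:, 1] - 1.5 * X[:, 3] + 0.01 * np.sin(5 * t)

n, k = X.shape
best_bic, best_subset = float("inf"), ()
for r in range(k + 1):
    for subset in combinations(range(k), r):
        # Design matrix: intercept plus the chosen columns.
        A = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = float(np.sum((y - A @ coef) ** 2))
        # BIC penalizes model size, so adding a useless column hurts.
        bic = n * np.log(rss / n) + A.shape[1] * np.log(n)
        if bic < best_bic:
            best_bic, best_subset = bic, subset

print(best_subset)  # the two columns that actually drive y
```

With k around 40 the 2^k enumeration is only feasible on parallel hardware, which is exactly the regime the text describes; for larger k, stepwise search explores the same model space greedily.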
Data dredging
Data dredging or data fishing are terms one may use to criticize someone's data mining efforts when it is felt that the patterns or causal relationships discovered are unfounded. In this case the pattern suffers from overfitting on the training data.
Data dredging is the scanning of the data for any relationships, and then, when one is found, coming up with an interesting explanation. The conclusions may be suspect because data sets with large numbers of variables will, by chance alone, contain some "interesting" relationships. Fred Schwed [4] said:
- "There have always been a considerable number of people who busy themselves examining the last thousand numbers which have appeared on a roulette wheel, in search of some repeating pattern. Sadly enough, they have usually found it."
Nevertheless, determining correlations in investment analysis has proven to be very profitable for statistical arbitrage operations (such as pairs trading strategies), and correlation analysis has shown to be very useful in risk management. Indeed, finding correlations in the financial markets, when done properly, is not the same as finding false patterns in roulette wheels.
Some exploratory data work is always required in any applied statistical analysis to get a feel for the data, so sometimes the line between good statistical practice and data dredging is less than clear.
Most data mining efforts are focused on developing highly detailed models of some large data set. Other researchers have described an alternate method that involves finding the minimal differences between elements in a data set, with the goal of developing simpler models that represent relevant data. [5]
When data sets contain a large number of variables, the significance threshold should be adjusted for the number of patterns tested. For example, if 100 random patterns are tested, one of them is expected, purely by chance, to appear "interesting" at the 0.01 significance level.
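The arithmetic behind this multiple-testing effect is straightforward. A short Python sketch, using the numbers from the text, shows the chance of at least one spurious "discovery" among 100 independent tests, and the Bonferroni-style correction that compensates by shrinking the per-test threshold:

```python
# 100 independent null patterns, each tested at the 0.01 level.
alpha, m = 0.01, 100

# Probability that at least one test comes out "significant" by chance.
p_any = 1 - (1 - alpha) ** m
print(round(p_any, 3))   # 0.634

# Expected number of false discoveries.
expected_hits = alpha * m
print(expected_hits)     # 1.0

# Bonferroni correction: test each pattern at alpha / m instead.
print(alpha / m)         # 0.0001
```

So with no real structure in the data at all, a dredging exercise over 100 patterns still "finds" something roughly two times out of three.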
Cross validation is a common approach to evaluating the fitness of a model generated via data mining, where the data is divided into a training subset and a test subset to respectively build and then test the model. Common cross validation techniques include the holdout method, k-fold cross validation, and the leave-one-out method.
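The holdout and k-fold schemes can be sketched without any libraries. In this illustrative Python example the "model" is deliberately trivial (it predicts the mean of its training targets), since the point is the splitting discipline, not the model; the data values are made up:

```python
import statistics

def k_fold_splits(n, k):
    """Yield (train_indices, test_indices) for k roughly equal folds."""
    indices = list(range(n))
    fold_size, extra = divmod(n, k)
    start = 0
    for i in range(k):
        stop = start + fold_size + (1 if i < extra else 0)
        test = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, test
        start = stop

# Toy model: predict the training mean for every test point, and
# accumulate squared errors over all k test folds.
y = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0, 5.0, 3.0]
errors = []
for train, test in k_fold_splits(len(y), k=5):
    prediction = statistics.mean(y[i] for i in train)
    errors.extend((y[i] - prediction) ** 2 for i in test)

print(round(statistics.mean(errors), 2))  # cross-validated mean squared error
```

The leave-one-out method is the special case k = n, so each test fold contains exactly one observation; the holdout method uses a single train/test split.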
Privacy concerns
There are also privacy concerns associated with data mining, specifically regarding the source of the data analyzed.
Data mining government or commercial data sets for national security or law enforcement purposes has also raised privacy concerns. [6]
There are many legitimate uses of data mining. For example, a database of prescription drugs taken by a group of people could be used to find combinations of drugs exhibiting harmful interactions. Since any particular combination may occur in only 1 out of 1000 people, a great deal of data would need to be examined to discover such an interaction. A project involving pharmacies could reduce the number of drug reactions and potentially save lives. Unfortunately, there is also a huge potential for abuse of such a database.
Essentially, data mining gives information that would not be available otherwise. It must be properly interpreted to be useful. When the data collected involves individual people, there are many questions concerning privacy, legality, and ethics.[7]
Combinatorial game data mining
- Data mining from combinatorial game oracles:
Since the early 1960s, the availability of oracles, also called tablebases, for certain combinatorial games (e.g. 3x3 chess from any beginning configuration, small-board dots-and-boxes, small-board hex, and certain endgames in chess, dots-and-boxes, and hex) has opened up a new area for data mining: the extraction of human-usable strategies from these oracles. Current pattern recognition approaches do not seem to offer the high level of abstraction required to be applied successfully here. Instead, insightful patterns are obtained through extensive experimentation with the tablebases, combined with intensive study of tablebase answers to well-designed problems and with knowledge of prior art, i.e. pre-tablebase knowledge. Berlekamp in dots-and-boxes and John Nunn in chess endgames are notable examples of researchers doing this work, though they were not and are not involved in tablebase generation.
Notable uses of data mining
- Data mining has been cited as the method by which the U.S. Army unit Able Danger had supposedly identified the 9/11 attack leader, Mohamed Atta, and three other 9/11 hijackers as possible members of an al Qaeda cell operating in the U.S. more than a year before the attack.
- See also: Able Danger, wikinews:U.S. Army intelligence had detected 9/11 terrorists year before, says officer.
- It has been suggested that both the CIA and its Canadian counterpart, CSIS, have put this method of interpreting data to work for them as well[8], although they have not said how.
Of course, two notable pitfalls in this type of justice application are the scarcity of suspect data points and the learning capabilities of adversaries. The first issue reflects the simple fact that a handful of suspects within a data set of 200 million people usually yields patterns which are scientifically questionable and often results in pointless investigative efforts. The second issue reflects the fact that as adversaries change strategy, their patterns of past behavior fail to provide clues to future activities. Hence, while data mining may well give useful results when applied to the behavior of customers shopping at discount stores, its applications within the justice system will forever be hindered by the scarcity of suspect data and the natural dynamic changes in adversarial strategies.
See also
- Artificial intelligence
- Bayesian network
- CRISP-DM
- Data analysis
- Data farming
- Descriptive statistics
- Fuzzy logic
- Hypothesis testing
- k-nearest neighbor algorithm
- Machine learning
- Pattern recognition
- Predictive analytics
- Preprocessing
- Statistics
Structured Data Mining
Unstructured Data Mining
- Text mining
- Image mining
Induction algorithms
Supervised learning
- Artificial neural network
- Boosting
- Decision tree learning
- Linear discriminant analysis
- Logit (in reference to logistic regression)
- Naive Bayes
- Nearest neighbor (pattern recognition)
- Neural network
- Quadratic classifier
- Random forest
- Support Vector Machine
Unsupervised learning
Dimensionality reduction
Application areas
- Business intelligence
- Business performance management
- Discovery science
- Loyalty card
- Cheminformatics
- Bioinformatics
- Intelligence services
Software
- BrainMaker
- CART
- DMSK (Data-Miner Software Kit)
- DTREG
- Funnelback
- Dr. Boetticher's free Genetic Program
- Dr. Boetticher's free Neural Network
- Insightful Miner
- Java Data Mining (JSR-73, JSR-247)
- KNIME
- KnowledgeSTUDIO / KnowledgeSEEKER
- MATLAB
- Microsoft Analysis Services
- MicroStrategy
- Neural network software
- Oracle Data Mining
- Orange
- PolyAnalyst
- R
- ROOT
- SAS
- SPSS
- STATA
- STATISTICA
- Teradata
- Weka
- YALE
References
- ^ W. Frawley, G. Piatetsky-Shapiro and C. Matheus (Fall 1992). "Knowledge Discovery in Databases: An Overview". AI Magazine: pp. 213-228. ISSN 0738-4602.
- ^ D. Hand, H. Mannila, P. Smyth (2001). Principles of Data Mining. MIT Press, Cambridge, MA. ISBN 0-262-08290-X.
- ^ A.G. Ivakhnenko (1970). "Heuristic Self-Organization in Problems of Engineering Cybernetics". Automatica 6: pp. 207-219. ISSN 0005-1098.
- ^ Fred Schwed, Jr (1940). Where Are the Customers' Yachts?. ISBN 0-471-11979-2.
- ^ T. Menzies, Y. Hu (November 2003). "Data Mining For Very Busy People". IEEE Computer: pp. 18-25. ISSN 0018-9162.
- ^ K.A. Taipale (December 15, 2003). "Data Mining and Domestic Security: Connecting the Dots to Make Sense of Data". Colum. Sci. & Tech. L. Rev. 5 (2). SSRN 546782 / OCLC 45263753.
- ^ Chip Pitts (March 15, 2007). "The End of Illegal Domestic Spying? Don't Count on It". Wash. Spec.
- ^ Stephen Haag et al. Management Information Systems for the Information Age, p. 28. ISBN 0-07-095569-7.
General References
- Vincent Granville, Ph.D. Click Fraud: New Definition and Methodology to Assess Generic Traffic Quality
- Kurt Thearling, An Introduction to Data Mining (also available is a corresponding online tutorial)
- Dean Abbott, I. Philip Matkovsky, and John Elder IV, Ph.D., An Evaluation of High-end Data Mining Tools for Fraud Detection, a comparative analysis of major high-end data mining software tools, presented at the 1998 IEEE International Conference on Systems, Man, and Cybernetics, San Diego, CA, October 12-14, 1998.
- Mierswa, Ingo and Wurst, Michael and Klinkenberg, Ralf and Scholz, Martin and Euler, Timm: YALE: Rapid Prototyping for Complex Data Mining Tasks, in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-06), 2006.
- Ajith Abraham, Vitorino Ramos, Web Usage Mining Using Artificial Ant Colony Clustering and Genetic Programming, in CEC'03 - Congress on Evolutionary Computation, IEEE Press, ISBN 0780378040, pp. 1384-1391, Canberra, Australia, 8-12 Dec. 2003.
- Peng, Y., Kou, G., Shi, Y. and Chen, Z. "A Systemic Framework for the Field of Data Mining and Knowledge Discovery", in Proceeding of workshops on The Sixth IEEE International Conference on Data Mining (ICDM), 2006
- Hari Mailvaganam and Daniel Chen, Articles on Data Mining
- Vitorino Ramos, Ajith Abraham, Evolving a Stigmergic Self-Organized Data-Mining, in ISDA-04, 4th Int. Conf. on Intelligent Systems, Design and Applications, Budapest, Hungary, ISBN 963-7154-30-2, pp. 725-730, August 26-28, 2004.
Books
- Ajith Abraham, Crina Grosan, Vitorino Ramos (Eds.), Swarm Intelligence in Data Mining, Springer-Verlag, 2006 (Preface and Foreword).
- Peter Cabena, Pablo Hadjnian, Rolf Stadler, Jaap Verhees, Allesandro Zanasi, Discovering Data Mining: From Concept to Implementation (1997), Prentice Hall, ISBN 0137439806
- Ronen Feldman and James Sanger, The Text Mining Handbook, Cambridge University Press, ISBN 9780521836579
- Pang-Ning Tan, Michael Steinbach and Vipin Kumar, Introduction to Data Mining (2005), ISBN 0-321-32136-7 (companion book site)
- Galit Shmueli, Nitin R. Patel and Peter C. Bruce , Data Mining for Business Intelligence (2006), ISBN 0-470-08485-5 (companion book site)
- Richard O. Duda, Peter E. Hart, David G. Stork, Pattern Classification, Wiley Interscience, ISBN 0-471-05669-3, (see also Powerpoint slides)
- Phiroz Bhagat, Pattern Recognition in Industry, Elsevier, ISBN 0-08-044538-1
- Ian Witten and Eibe Frank, Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations (2000), ISBN 1-55860-552-5, (see also Free Weka software)
- Mark F. Hornick, Erik Marcade, Sunil Venkayala: "Java Data Mining: Strategy, Standard, And Practice: A Practical Guide for Architecture, Design, And Implementation" (paperback)
- Weiss and Indurkhya, Predictive Data Mining, Morgan Kaufman
- Yike Guo and Robert Grossman, editors: High Performance Data Mining: Scaling Algorithms, Applications and Systems, Kluwer Academic Publishers, 1999.