Study form: Fulltime
Course language: Czech
The course explains machine learning methods that provide insight into data by automatically discovering interpretable data models, such as graph- and rule-based models. The course also addresses a theoretical framework explaining why and when the presented algorithms can in principle be expected to work. The lectures are given in English.
clustering, frequent patterns, classifiers, PAC-learning
1. Course introduction. Cluster analysis -- foundations (k-means, hierarchical and EM clustering).
2. Cluster analysis -- advanced methods (spectral clustering).
3. Cluster analysis -- special methods (conceptual and semi-supervised clustering, co-clustering).
4. Frequent itemset mining: the Apriori algorithm, association rules.
5. Frequent sequence mining. Episode rules. Sequence models.
6. Frequent subtrees and subgraphs.
7. Dimensionality reduction.
8. Computational learning theory -- introduction, PAC learning.
9. Computational learning theory (cont'd).
10. PAC-learning logic forms.
11. Learning in predicate logic.
12. Infinite concept spaces.
13. Empirical testing of hypotheses.
14. Wrapping up (if 14 lectures).
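The k-means method from lecture 1 can be illustrated with a minimal sketch of Lloyd's algorithm; the toy data points, the fixed iteration count, and the random seed are illustrative assumptions, not part of the course material.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means (Lloyd's algorithm) on a list of coordinate tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # illustrative init: k random points
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster.
        for c, pts in enumerate(clusters):
            if pts:
                centers[c] = tuple(sum(xs) / len(pts) for xs in zip(*pts))
    return centers, clusters

# Two well-separated toy blobs; k-means should recover them.
points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9)]
centers, clusters = kmeans(points, k=2)
```

The EM and hierarchical variants covered in the same lecture differ mainly in the assignment step (soft responsibilities) and in replacing iterative reassignment with successive merging, respectively.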
1. Entry test (prerequisite course RPZ). SW tools for machine learning (RapidMiner, WEKA).
2. Data preprocessing, missing and outlying values, clustering.
3. Hierarchical clustering, principal component analysis.
4. Spectral clustering.
5. Frequent itemset mining, association rules.
6. Frequent sequence/subgraph mining.
7. Test (first half of the course). Learning curves.
8. Underfitting and overfitting, ensemble classification, error estimates, cross-validation.
9. Model selection and assessment, ROC analysis.
10. Project work.
11. Project work.
12. Inductive logic programming: the Aleph system.
13. Statistical relational learning: the Alchemy system.
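The cross-validation error estimation practiced in tutorial 8 can be sketched as follows; the k-fold splitter, the trivial majority-class classifier, and the toy labels are assumptions for illustration only.

```python
from collections import Counter

def kfold_indices(n, k):
    # Split indices 0..n-1 into k contiguous folds of near-equal size.
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

def cv_error(xs, ys, k, fit, predict):
    """k-fold cross-validation estimate of classification error."""
    errors = 0
    for test in kfold_indices(len(xs), k):
        train = [i for i in range(len(xs)) if i not in test]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        errors += sum(predict(model, xs[i]) != ys[i] for i in test)
    return errors / len(xs)

# Illustrative baseline: always predict the most common training label.
def fit_majority(xs, ys):
    return Counter(ys).most_common(1)[0][0]

def predict_majority(model, x):
    return model

xs = [1, 2, 3, 4]
ys = [0, 0, 0, 1]
err = cv_error(xs, ys, k=4, fit=fit_majority, predict=predict_majority)
```

With 4 folds on these labels, only the fold holding out the single minority example is misclassified, giving an estimated error of 0.25.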
T. Mitchell: Machine Learning, McGraw-Hill, 1997.
P. Langley: Elements of Machine Learning, Morgan Kaufmann, 1996.
T. Hastie et al.: The Elements of Statistical Learning, Springer, 2001.