missing files

Martin Bauer 2023-01-03 20:49:14 +01:00
parent 804940bfac
commit 828d1b3360
26 changed files with 3639687 additions and 0 deletions

@@ -0,0 +1,434 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div>\n",
"<a href=\"http://www.music-processing.de/\"><img style=\"float:left;\" src=\"../data/FMP_Teaser_Cover.png\" width=40% alt=\"FMP\"></a>\n",
"<a href=\"https://www.audiolabs-erlangen.de\"><img src=\"../data/Logo_AudioLabs_Long.png\" width=59% style=\"float: right;\" alt=\"AudioLabs\"></a>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div>\n",
"<a href=\"../C5/C5.html\"><img src=\"../data/C5_nav.png\" width=\"100\" style=\"float:right;\" alt=\"C5\"></a>\n",
"<h1>Hidden Markov Model (HMM)</h1> \n",
"</div>\n",
"\n",
"<br/>\n",
"\n",
"<p>\n",
"Motivated by the chord recognition problem, we give in this notebook an overview of hidden Markov models (HMMs) and introduce three famous algorithmic problems related with HMMs following Section 5.3 of <a href=\"http://www.music-processing.de/\">[Müller, FMP, Springer 2015]</a>. For a detailed introduction of HMMs, we refer to the famous tutorial paper by Rabiner.\n",
"\n",
"<ul>\n",
"<li><span style=\"color:black\">\n",
"Lawrence R. Rabiner: <strong>A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition.</strong> Proceedings of the IEEE, 77 (1989), pp. 257&ndash;286. \n",
"<br> \n",
"<a type=\"button\" class=\"btn btn-default btn-xs\" target=\"_blank\" href=\"../data/bibtex/FMP_bibtex_Rabiner89_HMM_IEEE.txt\"> Bibtex </a>\n",
"</span></li>\n",
"</ul>\n",
"</p> "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Markov Chains\n",
"\n",
"Certain transitions from one chord to another are more likely than others. To capture such likelihoods, one can employ a concept called **Markov chains**. Abstracting from our chord recognition scenario, we assume that the chord types to be considered are represented by a set \n",
"\n",
"\\begin{equation}\n",
" \\mathcal{A}:=\\{\\alpha_{1},\\alpha_{2},\\ldots,\\alpha_{I}\\}\n",
"\\end{equation}\n",
"\n",
"of size $I\\in\\mathbb{N}$. The elements $\\alpha_{i}$ for $i\\in[1:I]$ are referred to as **states**. A progression of chords is realized by a system that can be described at any time instance $n=1,2,3,\\ldots$ as being in some state $s_{n}\\in\\mathcal{A}$. The change from one state to another is specified according to a set of probabilities associated with each state. In general, a probabilistic description of such a system can be quite complex. To simplify the model, one often makes the assumption that the probability of a change from the current state $s_{n}$ to the next state $s_{n+1}$ only depends on the current state, and not on the events that preceded it. In terms of conditional probabilities, this property is expressed by\n",
"\n",
"\\begin{equation}\n",
" P[s_{n+1}=\\alpha_{j}|s_{n}=\\alpha_{i},s_{n-1}=\\alpha_{k},\\ldots]\n",
" = P[s_{n+1}=\\alpha_{j}|s_{n}=\\alpha_{i}].\n",
"\\end{equation}\n",
"\n",
"The specific kind of \"amnesia\" is called the **Markov property**. Besides this property, one also often assumes that the system is **invariant under time shifts**, which means by definition that the following coefficients become independent of the index $n$:\n",
"\n",
"\\begin{equation}\n",
" a_{ij} := P[s_{n+1}=\\alpha_{j} | s_{n}=\\alpha_{i}] \\in [0,1]\n",
"\\end{equation}\n",
"\n",
"for $i,j\\in[1:I]$. These coefficients are also called **state transition probabilities**. They obey the standard stochastic constraint $\\sum_{j=1}^{I} a_{ij} = 1$ and can be expressed by an $(I\\times I)$ matrix, which we denote by $A$. A system that satisfies these properties is also called a (discrete-time) **Markov chain**. The following figure illustrates these definitions. It defines a Markov chain that consists of $I=3$ states $\\alpha_{1}$, $\\alpha_{2}$, and $\\alpha_{3}$, which correspond to the major chords $\\mathbf{C}$, $\\mathbf{G}$, and $\\mathbf{F}$, respectively. In the graph representation, the states correspond to the nodes, the transitions to the edges, and the transition probabilities to the labels attached to the edges. For example, the transition probability to remain in the state $\\alpha_{1}=\\mathbf{C}$ is $a_{11}=0.8$, whereas the transition probability of changing from $\\alpha_{1}=\\mathbf{C}$ to $\\alpha_{2}=\\mathbf{G}$ is $a_{12}=0.1$.\n",
"\n",
"<img src=\"../data/C5/FMP_C5_F24.png\" width=\"550px\" align=\"middle\" alt=\"FMP_C5_F24\">\n",
"\n",
"The model expresses the probability of all possible chord changes. To compute the probability of a given chord progression, one also needs the information on how the model gets started. This information is specified\n",
"by additional model parameters referred to as **initial state probabilities**. For a general Markov chain, these probabilities are specified by the numbers\n",
"\n",
"\\begin{equation}\n",
" c_{i} := P[s_{1}=\\alpha_{i}] \\in [0,1]\n",
"\\end{equation}\n",
"\n",
"for $i\\in[1:I]$. These coefficients, which sum up to one, can be expressed by a vector of length $I$ denoted by $C$."
]
},
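{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a small illustrative example (an ad-hoc sketch, not part of the FMP library), the following code cell computes the probability of the chord progression $(\\mathbf{C},\\mathbf{C},\\mathbf{G},\\mathbf{F})$ under the Markov chain from the figure above, using the initial state probabilities $c_{1}=0.6$, $c_{2}=0.2$, $c_{3}=0.2$ that are also used later in this notebook. Due to the Markov property, the probability of a state sequence $S=(s_{1},s_{2},\\ldots,s_{N})$ with $s_{n}=\\alpha_{i_n}$ factorizes as $P[S] = c_{i_1}\\cdot a_{i_1i_2}\\cdot\\ldots\\cdot a_{i_{N-1}i_N}$."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Transition probabilities and initial state distribution from the figure\n",
"# (states: alpha_1 = C, alpha_2 = G, alpha_3 = F)\n",
"A_chain = np.array([[0.8, 0.1, 0.1],\n",
"                    [0.2, 0.7, 0.1],\n",
"                    [0.1, 0.3, 0.6]])\n",
"C_chain = np.array([0.6, 0.2, 0.2])\n",
"\n",
"def prob_state_sequence(S, A, C):\n",
"    \"\"\"Probability of a state sequence under a Markov chain (A, C)\"\"\"\n",
"    prob = C[S[0]]\n",
"    for n in range(1, len(S)):\n",
"        prob = prob * A[S[n-1], S[n]]\n",
"    return prob\n",
"\n",
"# Chord progression C, C, G, F encoded as state indices (0, 0, 1, 2)\n",
"# Expected value: 0.6 * 0.8 * 0.1 * 0.1 = 0.0048\n",
"print('P[S] = %.4f' % prob_state_sequence([0, 0, 1, 2], A_chain, C_chain))"
]
},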
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Hidden Markov Models\n",
"\n",
"Based on a Markov chain, one can compute a probability for a given observation consisting of a sequence of states or chord types. In our [chord recognition scenario](../C5/C5S2_ChordRec_Templates.html), however, this is not what we need. Rather than observing a sequence of chord types, we observe a **sequence of chroma vectors** that are somehow related to the chord types. In other words, the state sequence is not directly visible, but only a fuzzier observation sequence that is generated based on the state sequence. This leads to an extension of Markov chains to a statistical model referred to as a **hidden Markov model** (HMM). The idea is to represent the relation between the observed feature vectors and the chord types (the states) using a probabilistic framework. Each state is equipped with a probability function that expresses the likelihood for a given chord type to output or emit a certain feature vector. As a result, we obtain a two-layered process consisting of a **hidden layer** and an **observable layer**. The hidden layer produces a state sequence that is not observable (\"hidden\"), but generates the observation sequence on the basis of the state-dependent probability functions.\n",
"\n",
"The **first layer** of an HMM is a **Markov chain** as introduced above. To define the second layer of an HMM, we need to specify a space of possible output values and a probability function for each state. In general, the output space can be any set including the real numbers, a vector space, or any kind of feature space. For example, in the case of chord recognition, this space may be modeled as the feature space $\\mathcal{F}=\\mathbb{R}^{12}$ consisting of all possible $12$-dimensional chroma vectors. For the sake of simplicity, we only consider the case of a **discrete HMM**, where the output space is assumed to be discrete and even finite. In this case, the space can be modeled as a finite set \n",
"\n",
"\\begin{equation}\n",
" \\mathcal{B} = \\{\\beta_{1},\\beta_{2},\\ldots,\\beta_{K}\\} \n",
"\\end{equation}\n",
"\n",
"of size $K\\in\\mathbb{N}$ consisting of distinct output elements $\\beta_{k}$, $k\\in[1:K]$, which are also referred to as **observation symbols**. An HMM associates with each state a probability function, which is also referred to as the **emission probability** or **output probability**. In the discrete case, the emission probabilities are specified by coefficients\n",
"\n",
"\\begin{equation}\n",
" b_{ik}\\in[0,1]\n",
"\\end{equation}\n",
"\n",
"for $i\\in[1:I]$ and $k\\in[1:K]$. Each coefficient $b_{ik}$ expresses the probability of the system to output the observation symbol $\\beta_{k}$ when in state $\\alpha_{i}$. Similarly to the state transition probabilities, the emission probabilities are required to satisfy the stochastic constraint $\\sum_{k=1}^{K} \\beta_{ik} = 1$ for $i\\in[1:I]$ (thus forming a probability distribution for each state). The coefficients can be expressed by an $(I\\times K)$ matrix, which we denote by $B$. In summary, an HMM is specified by a tuple\n",
"\n",
"\\begin{equation}\n",
" \\Theta:=(\\mathcal{A},A,C,\\mathcal{B},B).\n",
"\\end{equation}\n",
"\n",
"The sets $\\mathcal{A}$ and $\\mathcal{B}$ are usually considered to be fixed components of the model, while the probability values specified by $A$, $B$, and $C$ are the free parameters to be determined. This can be done explicitly by an expert based on his or her musical knowledge or by employing a learning procedure based on suitably labeled training data. Continuing the above example, the following figure illustrates a hidden Markov model, where the state-dependent emission probabilities are indicated by the labels of the dashed arrows.\n",
"\n",
"<img src=\"../data/C5/FMP_C5_F25.png\" width=\"400px\" align=\"middle\" alt=\"FMP_C5_F25\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the following code cell, we define the state transition probability matrix $A$ and the output probability $B$ as specified by the figure. \n",
"\n",
"* Here, we assume that $\\alpha_{1}=\\mathbf{C}$, $\\alpha_{2}=\\mathbf{G}$, and $\\alpha_{3}=\\mathbf{F}$. \n",
"* Furthermore, the elements of the output space $\\mathcal{B} = \\{\\beta_{1},\\beta_{2},\\beta_{3}\\}$ represent the three chroma vectors ordered from left to right. \n",
"* Finally, we assume that the initial state probability vector $C$ is given by the values $c_{1}=0.6$, $c_{2}=0.2$, $c_{3}=0.2$."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from sklearn.preprocessing import normalize \n",
"\n",
"A = np.array([[0.8, 0.1, 0.1], \n",
" [0.2, 0.7, 0.1], \n",
" [0.1, 0.3, 0.6]])\n",
"\n",
"C = np.array([0.6, 0.2, 0.2])\n",
"\n",
"B = np.array([[0.7, 0.0, 0.3], \n",
" [0.1, 0.9, 0.0], \n",
" [0.0, 0.2, 0.8]])"
]
},
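{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (a small addition for illustration), the following cell verifies that these matrices fulfill the stochastic constraints introduced above: each row of $A$ and $B$, as well as the vector $C$, sums to one."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Each row of A and B must form a probability distribution, and so must C\n",
"assert np.allclose(A.sum(axis=1), 1), 'Rows of A must sum to one'\n",
"assert np.allclose(B.sum(axis=1), 1), 'Rows of B must sum to one'\n",
"assert np.allclose(C.sum(), 1), 'C must sum to one'\n",
"print('Row sums of A:', A.sum(axis=1))\n",
"print('Row sums of B:', B.sum(axis=1))\n",
"print('Sum of C:', C.sum())"
]
},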
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## HMM-Based Sequence Generation \n",
"\n",
"Once an HMM is specified by $\\Theta:=(\\mathcal{A},A,C,\\mathcal{B},B)$, it can be used for various analysis and synthesis applications. Since it is very instructive, we now discuss how to (artificially) generate, on the basis of a given HMM, an observation sequence $O=(o_{1},o_{2},\\ldots,o_{N})$ of length $N\\in\\mathbb{N}$ with $o_n\\in \\mathcal{B}$, $n\\in[1:N]$. The generation procedure is as follows:\n",
"\n",
"1. Set $n=1$ and choose an initial state $s_n=\\alpha_i$ for some $i\\in[1:I]$ according to the initial state distribution $C$.\n",
"2. Generate an observation $o_n=\\beta_k$ for some $k\\in[1:K]$ according to the emission probability in state $s_n=\\alpha_i$ (specified by the $i^{\\mathrm{th}}$ row of $B$).\n",
"3. If $n=N$ then terminate the process. Otherwise, if $n<N$, transit to the new state $s_{n+1}=\\alpha_{j}$ according to the state transition probability at state $s_n=\\alpha_i$ (specified by the $i^{\\mathrm{th}}$ row of $A$). Then increase $n$ by one and return to step 2.\n",
"\n",
"In the next code cell, we implement this procedure and apply it to the example HMM specified above. Note that, due to Python conventions, we start in our implementation with index `0`."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"n = 0, S[0] = 0, O[0] = 0\n",
"n = 1, S[1] = 0, O[1] = 0\n",
"n = 2, S[2] = 0, O[2] = 2\n",
"n = 3, S[3] = 0, O[3] = 0\n",
"n = 4, S[4] = 0, O[4] = 0\n",
"n = 5, S[5] = 0, O[5] = 0\n",
"n = 6, S[6] = 0, O[6] = 2\n",
"n = 7, S[7] = 0, O[7] = 0\n",
"n = 8, S[8] = 0, O[8] = 0\n",
"n = 9, S[9] = 0, O[9] = 0\n",
"State sequence S: [0 0 0 0 0 0 0 0 0 0]\n",
"Observation sequence O: [0 0 2 0 0 0 2 0 0 0]\n"
]
}
],
"source": [
"def generate_sequence_hmm(N, A, C, B, details=False):\n",
" \"\"\"Generate observation and state sequence from given HMM\n",
"\n",
" Notebook: C5/C5S3_HiddenMarkovModel.ipynb\n",
"\n",
" Args:\n",
" N (int): Number of observations to be generated\n",
" A (np.ndarray): State transition probability matrix of dimension I x I\n",
" C (np.ndarray): Initial state distribution of dimension I\n",
" B (np.ndarray): Output probability matrix of dimension I x K\n",
" details (bool): If \"True\" then shows details (Default value = False)\n",
"\n",
" Returns:\n",
" O (np.ndarray): Observation sequence of length N\n",
" S (np.ndarray): State sequence of length N\n",
" \"\"\"\n",
" assert N > 0, \"N should be at least one\"\n",
" I = A.shape[1]\n",
" K = B.shape[1]\n",
" assert I == A.shape[0], \"A should be an I-square matrix\"\n",
" assert I == C.shape[0], \"Dimension of C should be I\"\n",
" assert I == B.shape[0], \"Column-dimension of B should be I\"\n",
"\n",
" O = np.zeros(N, int)\n",
" S = np.zeros(N, int)\n",
" for n in range(N):\n",
" if n == 0:\n",
" i = np.random.choice(np.arange(I), p=C)\n",
" else:\n",
" i = np.random.choice(np.arange(I), p=A[i, :])\n",
" k = np.random.choice(np.arange(K), p=B[i, :])\n",
" S[n] = i\n",
" O[n] = k\n",
" if details:\n",
" print('n = %d, S[%d] = %d, O[%d] = %d' % (n, n, S[n], n, O[n]))\n",
" return O, S\n",
"\n",
"N = 10\n",
"O, S = generate_sequence_hmm(N, A, C, B, details=True)\n",
"print('State sequence S: ', S)\n",
"print('Observation sequence O:', O)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sanity check for the plausibility of our sequence generation approach, we now check if the generated sequences reflect well the probabilities of our HMM. To this end, we estimate the original transition probability matrix $A$ and the output probability matrix $B$ from a generated observation sequence $O$ and state sequence $S$.\n",
"\n",
"* To obtain an estimate of the entry $a_{ij}$ of $A$, we count all transitions from $n$ to $n+1$ with $S(n)=\\alpha_i$ and $S(n+1)=\\alpha_j$ and then divide this number by the total number of transitions starting with $\\alpha_i$.\n",
"\n",
"* Similarly, to obtain an estimate of the entry $b_{ik}$ of $B$, we count the number of occurrences $n$ with $S(n)=\\alpha_i$ and $O(n)=\\beta_k$ and divide this number by the total number of occurrences of $\\alpha_i$ in $S$.\n",
"\n",
"When generating longer sequences by increasing the number $N$, the resulting estimates should approach the original values in $A$ and $B$. This is demonstrated by the subsequent experiment. \n",
"\n",
"<div class=\"alert alert-block alert-warning\">\n",
"Note: In practice, when estimating HMM model parameters from training data, only <strong>observation sequences</strong> are typically available, and the state sequences (that reflect the hidden generation process) are generally not known. Learning parameters only from observation sequences leads to much harder estimation problems as discussed below. \n",
"</div> "
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"======== Estimation results when using N = 100 ========\n",
"A =\n",
"[[ 0.800 0.100 0.100]\n",
" [ 0.200 0.700 0.100]\n",
" [ 0.100 0.300 0.600]]\n",
"A_est =\n",
"[[ 0.795 0.091 0.114]\n",
" [ 0.172 0.655 0.172]\n",
" [ 0.154 0.269 0.577]]\n",
"B =\n",
"[[ 0.700 0.000 0.300]\n",
" [ 0.100 0.900 0.000]\n",
" [ 0.000 0.200 0.800]]\n",
"B_est =\n",
"[[ 0.705 0.000 0.295]\n",
" [ 0.167 0.833 0.000]\n",
" [ 0.000 0.423 0.577]]\n",
"======== Estimation results when using N = 10000 ========\n",
"A =\n",
"[[ 0.800 0.100 0.100]\n",
" [ 0.200 0.700 0.100]\n",
" [ 0.100 0.300 0.600]]\n",
"A_est =\n",
"[[ 0.799 0.097 0.104]\n",
" [ 0.198 0.696 0.106]\n",
" [ 0.097 0.306 0.597]]\n",
"B =\n",
"[[ 0.700 0.000 0.300]\n",
" [ 0.100 0.900 0.000]\n",
" [ 0.000 0.200 0.800]]\n",
"B_est =\n",
"[[ 0.708 0.000 0.292]\n",
" [ 0.103 0.897 0.000]\n",
" [ 0.000 0.205 0.795]]\n"
]
}
],
"source": [
"def estimate_hmm_from_o_s(O, S, I, K):\n",
" \"\"\"Estimate the state transition and output probability matrices from\n",
" a given observation and state sequence\n",
"\n",
" Notebook: C5/C5S3_HiddenMarkovModel.ipynb\n",
"\n",
" Args:\n",
" O (np.ndarray): Observation sequence of length N\n",
" S (np.ndarray): State sequence of length N\n",
" I (int): Number of states\n",
" K (int): Number of observation symbols\n",
"\n",
" Returns:\n",
" A_est (np.ndarray): State transition probability matrix of dimension I x I\n",
" B_est (np.ndarray): Output probability matrix of dimension I x K\n",
" \"\"\"\n",
" # Estimate A\n",
" A_est = np.zeros([I, I])\n",
" N = len(S)\n",
" for n in range(N-1):\n",
" i = S[n]\n",
" j = S[n+1]\n",
" A_est[i, j] += 1\n",
" A_est = normalize(A_est, axis=1, norm='l1')\n",
"\n",
" # Estimate B\n",
" B_est = np.zeros([I, K])\n",
" for i in range(I):\n",
" for k in range(K):\n",
" B_est[i, k] = np.sum(np.logical_and(S == i, O == k))\n",
" B_est = normalize(B_est, axis=1, norm='l1')\n",
" return A_est, B_est\n",
"\n",
"N = 100\n",
"print('======== Estimation results when using N = %d ========' % N)\n",
"O, S = generate_sequence_hmm(N, A, C, B, details=False)\n",
"A_est, B_est = estimate_hmm_from_o_s(O, S, A.shape[1], B.shape[1])\n",
"np.set_printoptions(formatter={'float': \"{: 7.3f}\".format})\n",
"print('A =', A, sep='\\n')\n",
"print('A_est =', A_est, sep='\\n')\n",
"print('B =', B, sep='\\n')\n",
"print('B_est =', B_est, sep='\\n')\n",
"\n",
"N = 10000\n",
"print('======== Estimation results when using N = %d ========' % N)\n",
"O, S = generate_sequence_hmm(N, A, C, B, details=False)\n",
"A_est, B_est = estimate_hmm_from_o_s(O, S, A.shape[1], B.shape[1])\n",
"np.set_printoptions(formatter={'float': \"{: 7.3f}\".format})\n",
"print('A =', A, sep='\\n')\n",
"print('A_est =', A_est, sep='\\n')\n",
"print('B =', B, sep='\\n')\n",
"print('B_est =', B_est, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Three Problems for HMMs\n",
"\n",
"We have seen how a given HMM can be used to generate an observation sequence. We will now look at three famous algorithmic problems for HMMs that concern the specification of the free model parameters and the evaluation of observation sequences. \n",
"\n",
"### 1. Evaluation Problem\n",
"\n",
"The first problem is known as **evaluation problem**. Given an HMM specified by $\\Theta=(\\mathcal{A},A,C,\\mathcal{B},B)$ and an observation sequence $O=(o_{1},o_{2},\\ldots,o_{N})$, the task is to compute the probability \n",
"\n",
"\\begin{equation}\n",
" P[O|\\Theta]\n",
"\\end{equation}\n",
"\n",
"of the observation sequence given the model. From a slightly different viewpoint, this probability can be regarded as a score value that expresses how well a given model matches a given observation sequence. This interpretation becomes useful in the case where one is trying to choose among several competing models. The solution would then be to choose the model which best matches the observation sequence. To compute $P[O|\\Theta]$, we first consider a fixed state sequence $S=(s_1,s_2,\\ldots,s_N)$ of length $N$ with $s_n=\\alpha_{i_n}\\in\\mathcal{A}$ for some suitable $i_n\\in[1:I]$, $n\\in[1:N]$. The probability $P[O,S|\\Theta]$ for generating the state sequence $S$ as well as the observation sequence $O$ is given by \n",
"\n",
"$$\n",
"P[O,S|\\Theta] = c_{i_1}\\cdot b_{i_1k_1} \\cdot a_{i_1i_2}\\cdot b_{i_2k_2} \\cdot ...\\cdot a_{i_{N-1}i_N}\\cdot b_{i_Nk_N}\n",
"$$\n",
"\n",
"Next, to obtain the overall probability $P[O|\\Theta]$, one needs to sum up all these probabilities considering all possible state sequences $S$ of length $|S|=N$:\n",
"\n",
"$$\n",
"P[O|\\Theta] = \\sum_{S: |S|=N}P[O,S|\\Theta]\n",
"= \\sum_{i_1=1}^I \\sum_{i_2=1}^I \\ldots \\sum_{i_N=1}^I\n",
"c_{i_1}\\cdot b_{i_1k_1} \\cdot a_{i_1i_2}\\cdot b_{i_2k_2} \\cdot ...\\cdot a_{i_{N-1}i_N}\\cdot b_{i_Nk_N}\n",
"$$\n",
"\n",
"This leads to $I^N$ summands, a number that is exponential in the length $N$ of the observation sequence. Therefore, in practice, this brute-force calculation is computationally infeasible even for a small $N$. The good news is that there is a more efficient way to compute $P[O|\\Theta]$ using an algorithm that is based on the dynamic programming paradigm. This procedure, which is known as [**Forward&ndash;Backward Algorithm**](https://en.wikipedia.org/wiki/Forward%E2%80%93backward_algorithm), requires a number of operations on the order of $I^2N$ (instead of $I^N$). For a detailed description of this algorithm, we refer to the article by [Rabiner](https://ieeexplore.ieee.org/document/18626).\n",
"\n",
"\n",
"### 2. Uncovering Problem\n",
"\n",
"The second problem is the so-called **uncovering problem**. Again, we are given an HMM specified by $\\Theta=(\\mathcal{A},A,C,\\mathcal{B},B)$ and an observation sequence $O=(o_{1},o_{2},\\ldots,o_{N})$. Instead of finding the overall probability $P[O|\\Theta]$ for $O$, where one needs to consider **all** possible state sequences, the goal of the uncovering problem is to find the **single** state sequence $S=(s_{1},s_{2},\\ldots,s_{N})$ that \"best explains\" the observation sequence. The uncovering problem stated so far is not well defined since, in general, there is not a single \"correct\" state sequence generating the observation sequence. Indeed, one needs a kind of optimization criterion that specifies what is meant when talking about a best possible explanation. There are several reasonable choices for such a criterion, and the actual choice will depend on the intended application. In the [FMP notebook on the Viterbi algorithm](../C5/C5S3_Viterbi.html), we will discuss one possible choice as well as an efficient algorithm (called **Viterbi algorithm**). This algorithm, which can be thought of as a kind of context-sensitive smoothing procedure, will apply in the [FMP notebook on HMM-based chord recognition](../C5/C5S3_ChordRec_HMM.html). \n",
"\n",
"\n",
"### 3. Estimation Problem\n",
"\n",
"Besides the evaluation and uncovering problems, the third basic problem for HMMs is referred to as the **estimation problem**. Given an observation sequence $O$, the objective is to determine the free model parameters of $\\Theta$ (specified by by $A$, $C$, and $B$) that maximize the probability $P[O|\\Theta]$. In other words, the free model parameters are to be estimated so as to best describe the observation sequence. This is a typical instance of an **optimization problem** where a set of observation sequences serves as **training material** for adjusting or learning the HMM parameters. The estimation problem is by far the most difficult problem of HMMs. In fact, there is no known way to explicitly solve the given optimization problem. However, iterative procedures that find locally optimal solutions have been suggested. One of these procedures is known as the [**Baum&ndash;Welch Algorithm**](https://en.wikipedia.org/wiki/Baum%E2%80%93Welch_algorithm). Again, we refer to the article by [Rabiner](https://ieeexplore.ieee.org/document/18626) for more details. "
]
},
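{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the evaluation problem more concrete, the following code cell provides a minimal sketch of the forward procedure (the first half of the Forward&ndash;Backward Algorithm), together with the brute-force summation over all $I^N$ state sequences for a short observation sequence. The sketch only illustrates the dynamic programming idea requiring on the order of $I^2N$ operations; in particular, it omits the numerical scaling needed for long observation sequences (see the article by Rabiner). The function names are ad hoc and not part of the FMP library."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import itertools\n",
"\n",
"def forward_probability(O, A, C, B):\n",
"    \"\"\"Sketch of the forward procedure: computes P[O|Theta] in O(I^2 N)\"\"\"\n",
"    N = len(O)\n",
"    alpha = C * B[:, O[0]]                  # initialization\n",
"    for n in range(1, N):\n",
"        alpha = (alpha @ A) * B[:, O[n]]    # recursion\n",
"    return np.sum(alpha)                    # termination\n",
"\n",
"def brute_force_probability(O, A, C, B):\n",
"    \"\"\"Reference: sum P[O,S|Theta] over all I^N state sequences\"\"\"\n",
"    I = A.shape[0]\n",
"    N = len(O)\n",
"    prob = 0.0\n",
"    for S in itertools.product(range(I), repeat=N):\n",
"        p = C[S[0]] * B[S[0], O[0]]\n",
"        for n in range(1, N):\n",
"            p = p * A[S[n-1], S[n]] * B[S[n], O[n]]\n",
"        prob += p\n",
"    return prob\n",
"\n",
"O = np.array([0, 2, 0, 1, 1, 2])\n",
"print('Forward procedure:   P[O|Theta] = %.8f' % forward_probability(O, A, C, B))\n",
"print('Brute-force summing: P[O|Theta] = %.8f' % brute_force_probability(O, A, C, B))"
]
},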
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert\" style=\"background-color:#F5F5F5; border-color:#C8C8C8\">\n",
"<strong>Acknowledgment:</strong> This notebook was created by <a href=\"https://www.audiolabs-erlangen.de/fau/professor/mueller\">Meinard Müller</a>.\n",
"</div> "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table style=\"border:none\">\n",
"<tr style=\"border:none\">\n",
" <td style=\"min-width:50px; border:none\" bgcolor=\"white\"><a href=\"../C0/C0.html\"><img src=\"../data/C0_nav.png\" style=\"height:50px\" alt=\"C0\"></a></td>\n",
" <td style=\"min-width:50px; border:none\" bgcolor=\"white\"><a href=\"../C1/C1.html\"><img src=\"../data/C1_nav.png\" style=\"height:50px\" alt=\"C1\"></a></td>\n",
" <td style=\"min-width:50px; border:none\" bgcolor=\"white\"><a href=\"../C2/C2.html\"><img src=\"../data/C2_nav.png\" style=\"height:50px\" alt=\"C2\"></a></td>\n",
" <td style=\"min-width:50px; border:none\" bgcolor=\"white\"><a href=\"../C3/C3.html\"><img src=\"../data/C3_nav.png\" style=\"height:50px\" alt=\"C3\"></a></td>\n",
" <td style=\"min-width:50px; border:none\" bgcolor=\"white\"><a href=\"../C4/C4.html\"><img src=\"../data/C4_nav.png\" style=\"height:50px\" alt=\"C4\"></a></td>\n",
" <td style=\"min-width:50px; border:none\" bgcolor=\"white\"><a href=\"../C5/C5.html\"><img src=\"../data/C5_nav.png\" style=\"height:50px\" alt=\"C5\"></a></td>\n",
" <td style=\"min-width:50px; border:none\" bgcolor=\"white\"><a href=\"../C6/C6.html\"><img src=\"../data/C6_nav.png\" style=\"height:50px\" alt=\"C6\"></a></td>\n",
" <td style=\"min-width:50px; border:none\" bgcolor=\"white\"><a href=\"../C7/C7.html\"><img src=\"../data/C7_nav.png\" style=\"height:50px\" alt=\"C7\"></a></td>\n",
" <td style=\"min-width:50px; border:none\" bgcolor=\"white\"><a href=\"../C8/C8.html\"><img src=\"../data/C8_nav.png\" style=\"height:50px\" alt=\"C8\"></a></td>\n",
"</tr>\n",
"</table>"
]
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

@@ -0,0 +1,47 @@
"""Some simple tests/examples for the Home Assistant client."""
import asyncio
import logging
import sys
from hass_client import HomeAssistantClient
LOGGER = logging.getLogger()
if __name__ == "__main__":
logformat = logging.Formatter(
"%(asctime)-15s %(levelname)-5s %(name)s.%(module)s -- %(message)s")
consolehandler = logging.StreamHandler()
consolehandler.setFormatter(logformat)
LOGGER.addHandler(consolehandler)
LOGGER.setLevel(logging.DEBUG)
if len(sys.argv) < 3:
LOGGER.error("usage: test.py <url> <token>")
sys.exit()
url = sys.argv[1]
token = sys.argv[2]
loop = asyncio.get_event_loop()
hass = HomeAssistantClient(url, token)
async def hass_event(event, event_details):
"""Handle hass event callback."""
LOGGER.info("received event %s --> %s\n", event, event_details)
hass.register_event_callback(hass_event)
async def run():
"""Run tests."""
await hass.async_connect()
await asyncio.sleep(10)
await hass.async_close()
loop.stop()
try:
loop.create_task(run())
loop.run_forever()
except KeyboardInterrupt:
loop.stop()
loop.close()

@@ -0,0 +1,3 @@
pyserial-asyncio==0.6
python-vlc==3.0.12118
hass-client==0.1.2

40 espmusicmouse/todo.md Normal file

@@ -0,0 +1,40 @@
- button backlight [ok]
- playlists
  - download
- mounting in the shelf
  - brackets
  - power strip
  - LAN cable
- effect channel for audio effects
  - download "boing" etc.
- remote control, once the receiver is here
  - HA rules for the defaults
  - ansible cleanup
  - lirc
  - musicmouse channel
  - musicmouse effect channel
- Home Assistant integration
  - events to HA (figure, button press, ...)
  - mouse & ring LEDs from HA
  - HA device control (LED floodlight, roller blinds)
  - shelf light controlled from HA
- shelf LEDs
  - cable from the music mouse
  - cut the strips to size
  - cut the cables to the right length
  - solder the cables
  - test in the study
  - bonus: print corner pieces
  - effects for the shelf LEDs
  - music-dependent effects

BIN      figures/elephant.blend      Normal file (binary file not shown)
133640   figures/elephant.obj        Normal file (diff suppressed: too large)
BIN      figures/omnom.blend         Normal file (binary file not shown)
694529   figures/omnom.obj           Normal file (diff suppressed: too large)
696208   figures/omnom2.obj          Normal file (diff suppressed: too large)
BIN      figures/puppy.blend         Normal file (binary file not shown)
61345    figures/puppy.obj           Normal file (diff suppressed: too large)
101023   figures/puppy2.obj          Normal file (diff suppressed: too large)
BIN      figures/rabbit_ducky.blend  Normal file (binary file not shown)
110816   figures/rabbit_ducky.obj    Normal file (diff suppressed: too large)
BIN      figures/snowman.blend       Normal file (binary file not shown)
291311   figures/snowman.obj         Normal file (diff suppressed: too large)
1548458  figures/squirrel.obj        Normal file (diff suppressed: too large)
BIN      figures/squirrel.stl        Normal file (binary file not shown)
BIN      power_consumption_leds.ods  Normal file (binary file not shown)