Merge branch 'main' of https://git.47c.in/arc/notes
commit 03e27c281a

2	.obsidian/plugins/obsidian-git/data.json	Vendored
@@ -2,7 +2,7 @@
   "commitMessage": "vault backup: {{date}}",
   "autoCommitMessage": "vault backup: {{date}}",
   "commitDateFormat": "YYYY-MM-DD HH:mm:ss",
-  "autoSaveInterval": 1,
+  "autoSaveInterval": 5,
   "autoPushInterval": 0,
   "autoPullInterval": 5,
   "autoPullOnBoot": false,
10	education/math/Inverse Functions.md	Normal file
@@ -0,0 +1,10 @@
For a function to have an inverse, it needs exactly one $x$ for every $y$, and vice versa. You can use the horizontal line test on the original function to verify that its inverse is valid: if a horizontal line drawn at any height crosses the graph at two or more points, the inverse is not a valid function. To get the inverse, switch the $x$ and $y$ of the function; this mirrors the graph over the line $y = x$.

# Examples

Given the function below:

$$ y = \frac{1}{2}x + 3 $$

You can find the inverse by switching the $x$ and $y$ values and solving for $y$:

$$ x = \frac{1}{2}y + 3 $$

The range of the inverse is the same as the domain of the original, and vice versa.

You can verify an inverse by taking $f \circ g$ and simplifying; the result should be $x$.
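Continuing the example, solving $x = \frac{1}{2}y + 3$ for $y$ makes the inverse explicit:

$$ x - 3 = \frac{1}{2}y \quad \Rightarrow \quad y = 2x - 6 $$

Composing confirms the result: $\frac{1}{2}(2x - 6) + 3 = (x - 3) + 3 = x$.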
@@ -3,7 +3,7 @@
For $(f\circ g)(x)$ on two sets of ordered pairs, apply $g$ first: look for a pair in $f$ whose $x$ matches a $y$ value from $g$, and the leftover coordinates are the answer. The order of $f$ and $g$ does matter.
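A minimal sketch of this set composition; the sets and their values are hypothetical examples, not from the notes:

```python
# f and g as sets of (input, output) pairs -- hypothetical example values
g = {(1, 4), (2, 5), (3, 6)}
f = {(4, 10), (5, 11), (7, 12)}

def compose(f, g):
    # (f . g): for each (x, y) in g, find the pair (y, z) in f;
    # the leftover coordinates (x, z) form the answer.
    f_map = dict(f)
    return {(x, f_map[y]) for (x, y) in g if y in f_map}

print(sorted(compose(f, g)))  # [(1, 10), (2, 11)]; (3, 6) has no match in f
```

Swapping the arguments gives a different (here empty) result, matching the note that order matters.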
# Formulae

The general equation for a circle:

-$$ (x - h)^2 + (y - k)^2 = r $$
+$$ (x - h)^2 + (y - k)^2 = r^2 $$

Distance formula:

$$ \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2} $$

Midpoint formula:
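A quick numeric check of the circle and distance formulas, using hypothetical example points (the values are illustrative, not from the notes):

```python
import math

# Hypothetical example points
x1, y1 = 1, 2
x2, y2 = 4, 6

# Distance formula
d = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
print(d)  # 5.0

# The second point lies on the circle centered at (h, k) = (1, 2)
# with radius r = 5, since (x - h)^2 + (y - k)^2 = r^2 holds.
h, k, r = 1, 2, 5
print((x2 - h) ** 2 + (y2 - k) ** 2 == r ** 2)  # True
```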
@@ -29,9 +29,9 @@ If $x$ is above average, we expect the $y$ to be above average if there's a stro

## Calculating $r$ by hand

Put the $x$ values into $L1$, put the $y$ values into $L2$.

-1. Convert the $x$ each x value in the list to standard units($z$). Convert each $y$ value to standard units.
+1. Convert each $x$ value in the list to standard units ($z$). Convert each $y$ value to standard units. This will create two new lists containing $z_x$ and $z_y$.

$$ z = \frac{x-\bar{x}}{\sigma_x} $$

-2. Multiply the standard units for each ($x$, $y$) pair in the sets, giving you a third list, named $p$ in this example.
+2. Multiply the standard units for each ($z_x$, $z_y$) pair in the sets, giving you a fifth list, named $p$ in this example.

$$ z_x * z_y = p $$

3. Find the average of the values from step 2; this is $r$.

$$ r = \bar{p} $$
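The three steps above can be sketched in code. The data lists stand in for $L1$/$L2$ and are hypothetical example values; $\sigma$ is taken as the population SD, matching the $z$ formula:

```python
# Hypothetical L1 / L2 data (example values, not from the notes)
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]

def mean(vals):
    return sum(vals) / len(vals)

def sd(vals):
    # population standard deviation (sigma)
    m = mean(vals)
    return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5

# Step 1: convert each value to standard units
z_x = [(x - mean(xs)) / sd(xs) for x in xs]
z_y = [(y - mean(ys)) / sd(ys) for y in ys]

# Step 2: multiply the pairs, giving the list p
p = [a * b for a, b in zip(z_x, z_y)]

# Step 3: the average of p is r
r = mean(p)
print(round(r, 4))  # 0.7746
```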
@@ -82,10 +82,29 @@ Given a scatter diagram where the average of each set lies on the point $(75, 70

### The Regression Line/Least Squares Regression Line (LSRL)

- This line has a more moderate slope than the SD line; it does not go through the peaks of the "football".
- Predictions can only be made if the data displays a linear association (is a football shape).
- The regression line is *used to predict* the $y$ variable when the $x$ variable is given.
- In regression, the $x$ variable is the known variable, and $y$ is the value being solved for.
- The regression line goes through the point of averages, and its slope can be positive or negative.

$$ slope = r\left(\frac{\sigma_y}{\sigma_x}\right) $$

- You can find the regression line by multiplying $\sigma_y$ by $r$, for the rise, then using $\sigma_x$ for the run from the point of averages.

$$ \hat{y} = \frac{x-\bar{x}}{\sigma_x} * r * \sigma_y + \bar{y} $$

This formula finds the $z$ score for $x$ ($z_x$), scales it by $r$, and uses the equation $y = z * \sigma_y + \bar{y}$ to predict a value for one axis given the other axis. It predicts a $y$ value given a five-number summary of a set ($\bar{x}$, $\sigma_x$, $\bar{y}$, $\sigma_y$, $r$):

1. Find $z_x$
2. Multiply $z_x$ by $r$
3. Multiply that by $\sigma_y$
4. Add the average of $y$
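The four steps above can be sketched as a small predictor. The point of averages $(75, 70)$ comes from the example in the notes; the SDs and $r$ below are hypothetical example values:

```python
# Summary statistics: (75, 70) is the point of averages from the notes;
# sigma_x, sigma_y, and r are hypothetical example values.
x_bar, sigma_x = 75, 10
y_bar, sigma_y = 70, 8
r = 0.6

def predict_y(x):
    z_x = (x - x_bar) / sigma_x        # step 1: find z_x
    return z_x * r * sigma_y + y_bar   # steps 2-4: scale by r, by sigma_y, add y-bar

# One SD above average in x predicts r * sigma_y above average in y
print(predict_y(85))  # 74.8
```

At the point of averages itself the prediction is just $\bar{y}$, since $z_x = 0$ there.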
- For a positive association, for every $\sigma_x$ above average we are in $x$, the line predicts $y$ to be $r \cdot \sigma_y$ above the average of $y$.
- There are two separate regression lines: one for predicting $y$ from $x$, and one for predicting $x$ from $y$.
- Do not extrapolate outside of the graph.

### The Regression Effect

- In a test-retest situation, people with low scores tend to improve, and people with high scores tend to do worse. This means that individuals score closer to the average as they retest.
- The regression *fallacy* is attributing this to something other than chance error.

---

# Terminology

| Term | Definition |
| -- | -- |
| $\hat{y}$ | The predicted value |