├── .gitignore ├── TODO.txt ├── ai ├── ai-a-modern-approach.md ├── alpha-go.md ├── alpha-zero.md ├── cs188.md ├── deep-learning-take2.md ├── deep-learning.md ├── fast.txt ├── foundations-of-deep-learning.md ├── hmms │ ├── hmm.py │ └── hmm_test.py ├── hypothetical-ai-course-curriculum.md ├── kernel-trick.txt ├── list-of-topics.md ├── machine-learning.md ├── mining-massive-datasets.md ├── mixture-of-gaussians.md ├── naive-bayes-vs-logistic-regression.md ├── neural-nets.md ├── probablistic-graphical-models.md ├── probablistic-robotics.txt ├── reinforcement-learning.md ├── self-driving-car.md ├── support-vector-machines │ ├── clean-latex │ ├── make-latex │ └── support-vector-machines.tex └── vae.txt ├── algos ├── advanced-arrays-and-deques.md ├── art-of-multiprocessor.md ├── bloom-filter.md ├── bloom-filter2.md ├── closest-pair-of-points.md ├── concurrent-hash-map.md ├── dynamic-programming.md ├── esoteric-data-structures.md ├── graphs.md ├── hash-map-tricks.md ├── heaps.md ├── linear-merge-sort-falacy.md ├── linked-list.md ├── lock-free-algos.md ├── lsh.md ├── mutex.md ├── persistent-data-structures.md ├── randomized-algos.md ├── reservoir-sampling.md ├── sorts │ ├── integer-sorts.md │ ├── sorts-parallel2.md │ ├── sorts.c │ ├── sorts.md │ └── sorts2.c ├── string-matching.md ├── tractability.md ├── trees.md ├── tries.md └── tries2.md ├── databases ├── acid.md ├── cluster-databases.md ├── column-stores.md ├── database-systems-13-secondary-storage.md ├── database-systems-14-index-structures.md ├── database-systems-15-query-execution.md ├── database-systems-16-query-compiler.md ├── database-systems-17-coping-with-system-failures.md ├── database-systems-18-concurrency-control.md ├── database-systems-19-more-transaction-management.md ├── database-systems-20-parallel-and-distributed-databases.md ├── db-index.md ├── indexes-00-b-trees.md ├── indexes-01-log-structured-merge-tree.md ├── indexes-02-fractal-trees.md ├── inverted-index.md ├── multiversion-concurrency-control.md ├── olap-oltp.md ├── postgres-indices.md ├── principles-of-distributed-database-systems.md ├── scaling.md ├── serializable-snapshot-isolation.md ├── sort-vs-hash-join.md └── two-phase-locking.md ├── distributed-systems ├── amazon-services.md ├── big-table.md ├── byzantine-fault-tolerance-2-pbft.md ├── byzantine-fault-tolerance.md ├── cockroach.md ├── course-websites.md ├── crdt.md ├── datastores.md ├── dia-notes.md ├── distributed-algorithms-part-1-synchronous-network-algorithms.md ├── distributed-consensus.md ├── distributed-file-system.md ├── distributed-systems-03-networking.md ├── distributed-systems-14-time-and-global-states.md ├── distributed-systems-15-coordination-and-agreement.md ├── distributed-transactions.md ├── facebook.md ├── faunadb.md ├── flp-result.md ├── google-file-system.md ├── kafka.md ├── lamport-clocks.md ├── locking.md ├── map-reduce.md ├── marz-book.md ├── operational-transform.md ├── redis.md ├── spanner.md ├── spark.md └── time-clocks-ordering.md ├── lib ├── clean-latex └── make_latex.py ├── math ├── abstract-algebra │ ├── 01-basic-group-definitions-and-properties.tex │ ├── 02-subgroups.tex │ ├── 03-cyclic-groups-and-subgroups.tex │ ├── 03b-finite-fields.tex │ ├── 04-quotient-groups.tex │ ├── abstract-algebra.tex │ ├── clean-latex │ └── make-latex ├── bayes.md ├── calculus │ ├── TODO.md │ ├── aa-fubinis-theorem.md │ ├── xx-integral-of-gaussian-function.md │ └── yy-dominated-convergence-theorem.md ├── casella-berger-notes.md ├── chaos │ ├── chaos.tex │ ├── clean-latex │ └── make-latex ├── clt │ ├── 
clean-latex │ ├── clt.tex │ └── make-latex ├── conjugate-gradient-descent.md ├── cos-of-sum.JPG ├── diffie-hellman.md ├── direct-sum-vs-direct-product │ ├── clean-latex │ ├── direct-sum-vs-direct-product.tex │ └── make-latex ├── div-grad-curl │ ├── clean-latex │ ├── div-grad-curl.tex │ └── make-latex ├── eigenbasis-notes.md ├── eigenvector-decomposition-2 │ ├── clean-latex │ ├── eigenvector-decomposition-2.tex │ └── make-latex ├── eigenvector-decomposition-3 │ ├── 01-abstract.tex │ ├── 02-proposition.tex │ ├── 03-original-attempt.tex │ ├── 04-contrast-with-gaussian-elimination.tex │ ├── 05-exact-algorithm-for-eigenvectors.tex │ ├── 06-the-corrected-proof-optimization-goal.tex │ ├── 07-how-rotations-change-a.tex │ ├── 08-first-order-conditions.tex │ ├── 09-calculation-of-derivative-for-cross-terms.tex │ ├── 10-summary-thus-far.tex │ ├── 11-second-order-conditions.tex │ ├── 12-conclusion.tex │ ├── clean-latex │ ├── eigenvector-decomposition-3.tex │ ├── make-latex │ └── xx-an-example.tex ├── eigenvector-decomposition-intuition │ ├── clean-latex │ ├── eigenvector-decomposition-intuition.tex │ └── make-latex ├── eigenvector-decomposition.md ├── enderton.md ├── fourier-tex │ ├── 01-definitions.tex │ ├── 02-purpose.tex │ ├── 03-linear-algebra-review.tex │ ├── 04-relationship-to-linear-algebra.tex │ ├── 05-our-inner-product.tex │ ├── 06-adding-cosines-in.tex │ ├── 07-complex-sinusoidals.tex │ ├── 08-topology.tex │ ├── 09-aperiodic-functions.tex │ ├── 10-mass-vs-density.tex │ ├── 11-fourier-densities.tex │ ├── clean-latex │ ├── fourier.tex │ └── make-latex ├── hesse-form-of-line.md ├── kalman-filter │ ├── 01-the-gaussian-integral.tex │ ├── 02-definition-of-normal.tex │ ├── 03-mean-of-normal.tex │ ├── 04-variance-of-normal.tex │ ├── 05-mle-estimate-for-normal-mean.tex │ ├── 06-mle-estimate-for-normal-variance.tex │ ├── 07-the-posterior-distribution-of-mu.tex │ ├── 07a-definitions.tex │ ├── 07b-choosing-our-prior.tex │ ├── 07c-first-exercise.tex │ ├── 07d-second-exercise.tex │ ├── 07e-calculation-of-mumuxp.tex │ ├── 07f-calculation-of-varmuxp.tex │ ├── 07g-posterior-wrapup.tex │ ├── 08-the-multivariate-normal-distribution.tex │ ├── 09-mvn-posterior.tex │ ├── 10-kalman-filter.tex │ ├── 10a-definition.tex │ ├── 10b-prediction.tex │ ├── 10c-covariance-update.tex │ ├── 10d-innovation-format.tex │ ├── 10e-mean-update.tex │ ├── clean-latex │ ├── kalman-filter.tex │ └── make-latex ├── lagrange │ ├── clean-latex │ ├── lagrange.tex │ └── make-latex ├── linear-programming.txt ├── logistic_regression_model.md ├── momentum.md ├── newtons-method.md ├── ordinality.md ├── ordinary-differential-equations │ ├── clean-latex │ ├── make-latex │ └── ordinary-differential-equations.tex ├── proof-of-quadratic-formulas.md ├── quadratic-optimization.md ├── quaternions │ ├── clean-latex │ ├── make-latex │ └── quaternions.tex ├── riemann-hypothesis │ ├── clean-latex │ ├── make-latex │ └── riemann-hypothesis.tex ├── rsa │ ├── 01-lagranges-theorem.tex │ ├── 02-facts-of-z-mod-p.tex │ ├── 03-fermats-little-theorem.tex │ ├── 04-facts-of-z-mod-n.tex │ ├── 05-eulers-totient-theorem.tex │ ├── 06-rsa.tex │ ├── 07-euclidean-algorithm.tex │ ├── 08-eulers-totient-formula.tex │ ├── 09-factoring-the-cyclic-group-zmodn.tex │ ├── 10-factoring-the-ring-zmodn.tex │ ├── 11-factoring-zmodnx.tex │ ├── 12-zmodpx-is-cyclic.tex │ ├── 13-carmichaels-totient-function.tex │ ├── clean-latex │ ├── make-latex │ ├── rsa.tex │ ├── xx-alternative-fermat-proofs.md │ └── xx-chains.md ├── set-theory.md ├── singular-value-decomposition.md ├── 
space-of-sinusoidal-functions.md ├── strang-algebra-md │ ├── 01a-story-of-the-dot-product.md │ ├── 01b-linear-transformations.md │ ├── 01c-inversion-of-an-orthogonal-matrix.md │ ├── 02a-gaussian-elimination.md │ ├── 02b-lu-decomposition.md │ ├── 02c-geometric-intuition-of-lu-decomposition.md │ ├── 02d-computing-elimination.md │ ├── 02e-elementary-matrices.md │ ├── 02f-transpose.md │ ├── 03a-column-and-null-spaces.md │ ├── 03b-calculation-of-null-space.md │ ├── 03c-full-inversion.md │ ├── 03d-more-vector-spaces.md │ ├── 04a-orthogonal-subspaces.md │ ├── 04b-simplest-projections.md │ ├── 04c-projection-into-subspaces.md │ ├── 04d-more-about-projections.md │ ├── 04e-projections-are-symmetric.md │ ├── 04f-projections-without-orthogonality.md │ ├── 04g-least-squares-approximations.md │ ├── 04h-b-data-matrix-whitening.md │ ├── 04h-least-squares-intuition.md │ ├── 04i-gram-schmidt.md │ ├── 05a-determinants.md │ ├── 06a-eigenvectors-and-values.md │ ├── 06b-differential-equations.md │ ├── 06c-symmetric-matrices.md │ └── 06x-orthogonally-diagonalizable-implies-symmetric.txt ├── strang-algebra │ ├── 01a-story-of-the-dot-product.tex │ ├── 01b-linear-transformations.tex │ ├── 01c-inversion-of-an-orthogonal-matrix.tex │ ├── 02a-gaussian-elimination.tex │ ├── 02b-lu-decomposition.tex │ ├── 02c-geometric-intuition-of-lu-decomposition.tex │ ├── 02d-computing-elimination.tex │ ├── 02e-elementary-matrices.tex │ ├── 02f-transpose.tex │ ├── 03a-column-and-null-spaces.tex │ ├── 03b-calculation-of-null-space.tex │ ├── 03c-full-inversion.tex │ ├── 03d-more-vector-spaces.tex │ ├── 04a-orthogonal-subspaces.tex │ ├── 04b-simplest-projections.tex │ ├── 04c-projection-into-subspaces.tex │ ├── 04d-more-about-projections.tex │ ├── 04e-projections-are-symmetric.tex │ ├── 04f-projections-without-orthogonality.tex │ ├── 04g-least-squares-approximations.tex │ ├── 04h-b-data-matrix-whitening.tex │ ├── 04h-least-squares-intuition.tex │ ├── 04i-gram-schmidt.tex │ ├── 05a-determinants.tex │ ├── 06a-eigenvectors-and-values.md │ ├── 06b-differential-equations.md │ ├── 06c-symmetric-matrices.md │ ├── 06x-orthogonally-diagonalizable-implies-symmetric.txt │ ├── clean-latex │ ├── make-latex │ └── strang-algebra.tex ├── strang.md ├── stys │ ├── ned-abstract-algebra.sty │ ├── ned-calculus.sty │ ├── ned-common.sty │ ├── ned-common │ │ ├── fractions-powers.sty │ │ ├── misc.sty │ │ ├── parens-fns.sty │ │ ├── parens.sty │ │ ├── sets.sty │ │ └── symbols.sty │ ├── ned-linear-algebra.sty │ ├── ned-linear-algebra │ │ ├── matrices.sty │ │ ├── operators.sty │ │ ├── subscripts.sty │ │ └── vectors.sty │ ├── ned-nedqn.sty │ ├── ned-stats.sty │ └── ned-stats │ │ ├── ned-stats-constants.sty │ │ ├── ned-stats-gaussians.sty │ │ ├── ned-stats-operators.sty │ │ ├── ned-stats-probability.sty │ │ └── ned-stats-symbols.sty ├── trigonometry-and-complex-numbers.md └── variational-inference.md ├── programming-languages ├── all-the-casts.md ├── c++11-features.md ├── clojure-notes-2.md ├── clojure-notes.md ├── compilers.md ├── concurrency.md ├── condition_var.cpp ├── cpp_notes.md ├── diamond-inheritance.cc ├── exec-fork-etc.md ├── features.md ├── gadt_example.hs ├── garbage-collection.md ├── generators.js ├── generators.md ├── green-threads.md ├── haskell-quick-sort.hs ├── haskell.md ├── haskell │ ├── 00-data-functor.md │ ├── 01-control-applicative.md │ ├── 02-data-semigroup.md │ ├── 03-data-monoid.md │ ├── 04-data-foldable.md │ ├── 05-control-monad.md │ ├── 05b-knights-travails.md │ ├── 05c-alternative.md │ ├── 06-control-monad-writer.md │ ├── 
07-control-monad-state.md │ ├── 08-data-either.md │ ├── 09-various-monad-functions.md │ ├── 09b-foldl-and-foldr.md │ ├── 10-probabilistic-map.hs │ ├── 11-st-monad.md │ ├── xx-todo.md │ └── yy-summary.md ├── interface-dispatch.cc ├── interface-dispatch.md ├── interpreters.md ├── java-evolution.md ├── list.rs ├── monads.hs ├── multi_parameter_type_class.hs ├── paredit-notes.md ├── postgresql-features.md ├── programming-languages2.md ├── python.md ├── rank_n_example.hs ├── reagents.md ├── rust │ ├── interesting-libraries.txt │ ├── rust-by-example.md │ └── the-rust-programming-language-notes.md ├── scala-notes.md ├── spawn.js ├── spawn.rb ├── standard-library.md ├── tagged-ptrs.md ├── unique_ptr.cpp └── visitor-pattern.md ├── randos ├── 1password.md ├── aaronson.md ├── architecture-problems.md ├── arrow.md ├── autoconf.md ├── automotive │ ├── all-wheel-drive.md │ ├── brakes.md │ ├── differential.md │ ├── electric-car-charging.md │ ├── engines │ │ ├── 01-single-cylinder.md │ │ ├── 02a-inline-twin.md │ │ ├── 02b-flat-twin.md │ │ ├── 02c-v-twin.md │ │ ├── 02d-other-v-twins.md │ │ ├── 03-triple.md │ │ ├── 04a-inline-four.md │ │ ├── 04b-flat-four.md │ │ ├── 04c-v-four.md │ │ ├── 05-inline-five.md │ │ ├── 06a-inline-six.md │ │ ├── 06b-v-six.md │ │ ├── 06c-flat-six.md │ │ ├── 08-v-eight.md │ │ ├── 10-v-ten.md │ │ ├── 12-v-twelve.md │ │ ├── TODO.md │ │ ├── engine-balancing.md │ │ ├── engine-configurations.md │ │ ├── gear-shifting.md │ │ ├── horsepower-and-torque.md │ │ └── transmission.md │ ├── motorcycling-overview.md │ ├── suspension.md │ ├── tire-sizes.md │ └── turbocharging.md ├── aws-gpu-instance-setup.md ├── beginner-programming-resources-problems-writeup.md ├── beginner-programming-resources-problems.md ├── beginner-programming-resources.md ├── bicycles │ ├── alfine-igh-exploded-view.pdf │ ├── bike-manufacturers.md │ ├── bottom-bracket.md │ ├── brakes.md │ ├── cables.md │ ├── derailleur-gearing.md │ ├── drivetrain.md │ ├── gearhubs.md │ ├── lubrication-and-cleaning.md │ ├── maintenance-log.md │ ├── my-bicycle-specifications.md │ ├── new-bike-options.md │ ├── pedals.md │ ├── shifters.md │ ├── shimano-ranges.md │ └── tires-and-tubes.md ├── bittorrent.md ├── books.md ├── cache-coherency.md ├── car-keys.md ├── chess-notes.md ├── computer-science-problem-areas.md ├── consciousness.md ├── console-input.md ├── course-catalog.md ├── cpu-architecture.md ├── crypto.md ├── cryptocurrency │ ├── bitcoin.md │ ├── blocksize.md │ ├── lightning.md │ ├── new-ideas.md │ ├── proof-of-stake-resources.md │ └── proof-of-stake.md ├── cuda │ ├── cuda-basics-slides.md │ ├── cuda-programming-guide.md │ ├── even-easier-introduction.md │ ├── stencil.cu │ └── vector.cu ├── denard.md ├── descent.md ├── dns-records.md ├── docker.txt ├── electrical-outlets-and-plugs-by-country.md ├── emscripten.md ├── endianness.md ├── error-correcting-codes │ ├── clean-latex │ ├── error-correcting-codes.tex │ └── make-latex ├── fake-github-commit.txt ├── finance │ ├── american-depositary-receipt.md │ ├── bank-account-types.md │ ├── condos-and-coops.md │ ├── dutch-disease.md │ ├── eurodollars.md │ ├── fund-types.md │ ├── futures.md │ ├── inflation-hedges.md │ ├── options.md │ ├── revolver.md │ └── startup-equity.md ├── fuses.md ├── game-theory.md ├── geforce-now.md ├── gpu-notes.md ├── heating-and-cooling.md ├── http2.md ├── info-retrieval.md ├── insane-computer.md ├── inventions.md ├── journals-papers-people.md ├── latex │ ├── amsmath-user-guide.md │ ├── latex-for-beginners.md │ ├── latex.md │ ├── mathmode.md │ ├── 
not-so-short-introduction.md │ └── unicode-fonts-in-latex.md ├── lightbulbs.md ├── linking.md ├── lockpicking │ ├── clean-latex │ ├── lockpicking.tex │ └── make-latex ├── make-electronics │ ├── clean-latex │ ├── make-electronics.tex │ └── make-latex ├── makefiles.md ├── maybe-more-books.md ├── messaging-apps.md ├── music-theory │ ├── clean-latex │ ├── make-latex │ └── music-theory.tex ├── ned-electronics │ ├── clean-latex │ ├── make-latex │ └── ned-electronics.tex ├── online-courses.md ├── online-electircal-engineering-courses.md ├── papers │ ├── bigtable.pdf │ ├── chubby.pdf │ ├── dynamo.pdf │ ├── f1.pdf │ ├── gfs.pdf │ ├── mapreduce.pdf │ ├── megastore.pdf │ ├── percolator.pdf │ └── spanner.pdf ├── passkeys.md ├── peripherals │ ├── displays.md │ ├── hdcp.md │ ├── keyboards.md │ ├── laptops-i-have-owned.md │ ├── sd-cards.md │ ├── thunderbolt.md │ └── usb.md ├── physics │ ├── am-vs-fm.txt │ ├── art-of-electronics.md │ ├── cells-and-batteries.md │ ├── drude-model.md │ ├── electrical-generation.md │ ├── electrical-transformers.md │ ├── electricity.md │ ├── electricity2.md │ ├── endothermic-and-exothermic-reactions.md │ ├── feynman │ │ ├── 01-atoms-in-motion │ │ │ ├── 01-atoms-in-motion.tex │ │ │ └── 01a-atoms-in-motion.tex │ │ ├── 02-basic-physics │ │ │ ├── 02-basic-physics.tex │ │ │ └── 02a-basic-physics.tex │ │ ├── 03-the-relation-of-physics-to-other-sciences │ │ │ ├── 03-the-relation-of-physics-to-other-sciences.tex │ │ │ └── 03a-the-relation-of-physics-to-other-sciences.tex │ │ ├── 04-conservation-of-energy │ │ │ ├── 04-conservation-of-energy.tex │ │ │ └── 04a-conservation-of-energy.tex │ │ ├── 05-time-and-distance │ │ │ ├── 05-time-and-distance.tex │ │ │ └── 05a-time-and-distance.tex │ │ ├── 06-probability │ │ │ ├── 06-probability.tex │ │ │ └── 06a-probability.tex │ │ ├── 07-the-theory-of-gravitation │ │ │ ├── 07-the-theory-of-gravitation.tex │ │ │ └── 07a-the-theory-of-gravitation.tex │ │ ├── 08-motion │ │ │ ├── 08-motion.tex │ │ │ └── 08a-motion.tex │ │ ├── 09-newtons-laws-of-dynamics │ │ │ ├── 09-newtons-laws-of-dynamics.tex │ │ │ └── 09a-newtons-laws-of-dynamics.tex │ │ ├── 10-conservation-of-momentum │ │ │ ├── 10-conservation-of-momentum.tex │ │ │ └── 10a-conservation-of-momentum.tex │ │ ├── 11-vectors │ │ │ ├── 11-vectors.tex │ │ │ └── 11a-vectors.tex │ │ ├── 12-characteristics-of-force │ │ │ ├── 12-characteristics-of-force.tex │ │ │ └── 12a-characteristics-of-force.tex │ │ ├── 13-work-and-potential-energy-a │ │ │ ├── 13-work-and-potential-energy-a.tex │ │ │ └── 13a-work-and-potential-energy-a.tex │ │ ├── 14-work-and-potential-energy-conclusion │ │ │ ├── 14-work-and-potential-energy-conclusion.tex │ │ │ └── 14a-work-and-potential-energy-conclusion.tex │ │ ├── 15-the-special-theory-of-relativity │ │ │ ├── 15-the-special-theory-of-relativity.tex │ │ │ ├── 15a-principles.tex │ │ │ ├── 15b-michelson-morley.tex │ │ │ ├── 15c-lorentzian-transformation.tex │ │ │ ├── 15d-time-dilation.tex │ │ │ ├── 15e-simultaneity.tex │ │ │ ├── 15f-space-time.tex │ │ │ ├── 15g-relativistic-mass.tex │ │ │ ├── 15h-relativistic-energy.tex │ │ │ └── 15i-pedagogical-note.tex │ │ ├── 16-relativistic-energy-and-momentum │ │ │ ├── 16-relativistic-energy-and-momentum.tex │ │ │ ├── 16a-relativity-and-accelerations.tex │ │ │ ├── 16b-transformation-of-velocities.tex │ │ │ ├── 16c-derivation-of-relativistic-mass.tex │ │ │ ├── 16d-relativistic-energy.tex │ │ │ └── 16e-potential-energy.tex │ │ ├── 17-space-time │ │ │ ├── 17-space-time.tex │ │ │ ├── 17a-minkowski-space.tex │ │ │ ├── 17b-four-momentum.tex │ │ │ ├── 
17c-photons.tex │ │ │ └── 17d-proper-time.tex │ │ ├── 18-rotation-in-two-dimensions │ │ │ ├── 18-rotation-in-two-dimensions.tex │ │ │ ├── 18-xx-lever-arm-diagram.jpeg │ │ │ ├── 18a-rigid-bodies.tex │ │ │ ├── 18b-total-translational-force.tex │ │ │ ├── 18c-rotation-and-torque.tex │ │ │ ├── 18d-conservation-of-angular-momentum.tex │ │ │ └── 18e-total-rotational-inertia.tex │ │ ├── 19-center-of-mass-moment-of-inertia │ │ │ ├── 19-center-of-mass-moment-of-inertia.tex │ │ │ ├── 19a-properties-of-com.tex │ │ │ ├── 19b-center-of-gravity.tex │ │ │ ├── 19c-com-and-inertia-calculations.tex │ │ │ ├── 19d-rotational-kinetic-energy.tex │ │ │ ├── 19e-coriolis-force-for-radial-travel.tex │ │ │ └── 19f-coriolis-force-for-tangential-travel.tex │ │ ├── 20-rotation-in-space │ │ │ ├── 20-rotation-in-space.tex │ │ │ └── 20a-rotation-in-space.tex │ │ ├── 21-the-harmonic-oscillator │ │ │ ├── 21-the-harmonic-oscillator.tex │ │ │ └── 21a-the-harmonic-oscillator.tex │ │ ├── 39-the-kinetic-theory-of-gases │ │ │ ├── 39-the-kinetic-theory-of-gases.tex │ │ │ └── 39a-the-kinetic-theory-of-gases.tex │ │ ├── clean-latex │ │ ├── common │ │ │ └── preamble.tex │ │ ├── feynman.tex │ │ └── make-latex │ ├── gfci.md │ ├── ground-vs-neutral.md │ ├── hydraulic-pressure.md │ ├── mechanics.md │ ├── mechanics │ │ ├── 00-gravitation.tex │ │ ├── 01-momentum.tex │ │ ├── 02-energy.tex │ │ ├── clean-latex │ │ ├── make-latex │ │ └── mechanics.tex │ ├── microwave-oven-spot-welder.md │ ├── nand-to-tetris.md │ ├── power-generation.md │ └── power-inverter.md ├── pickups.md ├── prisoners-dilemma.txt ├── pumps.md ├── quantum-tex │ ├── 01-basics.tex │ ├── 02-superposition-entanglement.tex │ ├── 03-multi-qubit-gates.tex │ ├── 04-epr-pair-production.tex │ ├── 05-no-cloning.tex │ ├── 06-hadamard-in-depth.tex │ ├── 07-key-distribution.md │ ├── 07-quantum-teleportation.tex │ ├── 08-simons-algorithm.md │ ├── 09-grovers-algorithm.md │ ├── 10-graphical-grovers.md │ ├── clean-latex │ ├── make-latex │ ├── quantum.tex │ └── xx-sources.md ├── quantum │ ├── 00-basics.md │ ├── 01-superposition-entanglement.md │ ├── 02-multi-qubit-gates.md │ ├── 03-no-cloning.md │ ├── 04-epr-pair-production.md │ ├── 05-hadamard-in-depth.md │ ├── 06-quantum-teleportation.md │ ├── 07-key-distribution.md │ ├── 08-simons-algorithm.md │ ├── 09-grovers-algorithm.md │ ├── 10-graphical-grovers.md │ └── xx-sources.md ├── rainbow-tables.md ├── read-copy-update.md ├── remote-x11-access.txt ├── research-over-the-years.txt ├── rooting.txt ├── rsync.md ├── ssd.md ├── ssh-setup.md ├── theorem-proving-books.md ├── turbojets.md ├── unicode.txt ├── zero-knowledge-proof.md └── zk-snarks.md └── web ├── architecture-lecture.md ├── caching.md ├── cors.md ├── cors2.md ├── express.md ├── graphQL.md ├── js-features.md ├── js-loading.md ├── module-loading.md ├── play-framework.md ├── production-rails.md ├── react.md ├── react2.md ├── react3.md ├── react4.md ├── redcarpet_rouge.md └── resque.md /.gitignore: -------------------------------------------------------------------------------- 1 | # Latex junk 2 | *.aux 3 | *.fdb_latexmk 4 | *.fls 5 | *.log 6 | *.out 7 | *.pdf 8 | *.gz 9 | -------------------------------------------------------------------------------- /ai/fast.txt: -------------------------------------------------------------------------------- 1 | ## Resources 2 | 3 | * http://course.fast.ai/lessons/lessons.html 4 | * https://github.com/fastai/courses/tree/master/deeplearning1/nbs 5 | * http://wiki.fast.ai/ 6 | -------------------------------------------------------------------------------- 
/ai/hmms/hmm_test.py: -------------------------------------------------------------------------------- 1 | import hmm 2 | import numpy as np 3 | 4 | def make_test_model(transition_matrix, emission_matrix): 5 | return hmm.Model( 6 | num_hidden_states = 3, 7 | num_emission_values = 3, 8 | initial_state_probabilities = np.array([1.0, 0.0, 0.0]), 9 | transition_matrix = transition_matrix, 10 | emission_matrix = emission_matrix, 11 | ) 12 | 13 | def cycle_matrix(): 14 | return np.array([ 15 | [0, 0, 1], 16 | [1, 0, 0], 17 | [0, 1, 0], 18 | ], dtype = np.float32) 19 | 20 | def test_basic_generation(): 21 | model = make_test_model( 22 | cycle_matrix(), 23 | cycle_matrix() 24 | ) 25 | 26 | (hidden_states, observations) = hmm.run_model(model, 5) 27 | assert((hidden_states == np.array([ 28 | 0, 1, 2, 0, 1 29 | ])).all()) 30 | 31 | assert((observations == np.array([ 32 | 1, 2, 0, 1, 2 33 | ])).all()) 34 | 35 | def test_basic_forward(): 36 | model = make_test_model( 37 | cycle_matrix(), 38 | cycle_matrix() 39 | ) 40 | 41 | num_steps = 2 42 | (hidden_states, observations) = hmm.run_model(model, num_steps) 43 | filter_estimates = hmm.forward_probabilities(model, observations) 44 | assert(len(filter_estimates) == (num_steps + 1)) 45 | 46 | print(filter_estimates[0]) 47 | assert(( 48 | filter_estimates[0] == np.array([1.0, 0.0, 0.0]) 49 | ).all()) 50 | 51 | test_basic_generation() 52 | test_basic_forward() 53 | -------------------------------------------------------------------------------- /ai/list-of-topics.md: -------------------------------------------------------------------------------- 1 | * Artificial Intelligence 2 | * Clustering: graphical models, LSH 3 | * Collaborative filtering (NMF) 4 | * Decision Trees, Random Forests, Boosting, Bagging 5 | * Graphical Models (Bayesian Nets, Naive Bayes, Markov Fields, HMM) 6 | * Variable elimination. 7 | * Belief propagation. 8 | * Gibbs sampling. 9 | * MCMC. 10 | * Max likelihood, Dirichlet 11 | * Nearest Neighbors 12 | * Neural Nets, Deep Learning 13 | * FFNN 14 | * Autoencoder 15 | * RNN 16 | * Convolutional NN 17 | * Boltzmann machine 18 | * Self-organizing map 19 | * NLP 20 | * Regression (logistic, multinomial, perceptron) 21 | * Reinforcement Learning 22 | * Representation of Knowledge 23 | * Search 24 | * A-star, Alpha-beta 25 | * Constraint solving 26 | * Support Vector Machines 27 | * Machine Learning 28 | * Control Theory (Kalman, optimal control) 29 | * DSP 30 | * Optimization (convex, linear, nonlinear, gradient descent) 31 | * Theorem Proving, Symbolic Logic 32 | * Applications 33 | * Information Retrieval 34 | * Planning (constraint satisfaction) 35 | * Robotics 36 | * Speech Recognition 37 | * Vision 38 | -------------------------------------------------------------------------------- /ai/mixture-of-gaussians.md: -------------------------------------------------------------------------------- 1 | Typically, *k-means* just means running an iterative process to find 2 | points that are the mean of the points assigned to the cluster. It's 3 | basically doing EM for solving a mixture of Gaussians, except that (1) each 4 | Gaussian is required to have a spherical covariance matrix (i.e., its 5 | spread is the same across all dimensions) and (2) each point is 6 | deterministically assigned to a single cluster. 7 | 8 | In mixture models, you can have non-spherical Gaussians (i.e., condition 9 | 1 does not hold). And, you can give each point a weight in each 10 | cluster, even if it's very small.
This would be done in proportion to 11 | the likelihood of that point in the cluster. This is smoother than 12 | deterministic assignment. 13 | 14 | K-means is often used for quantization. 15 |
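To make the soft assignment concrete, here is a minimal sketch (my own illustration, not from the notes; it assumes hypothetical arrays `weights`, `means`, and `covs` describing the mixture components):

```python
import numpy as np
from scipy.stats import multivariate_normal

def soft_assignments(points, weights, means, covs):
    # points: (n, d). For each cluster j, compute
    # P(cluster j) * N(point; means[j], covs[j]) for every point.
    likelihoods = np.stack([
        w * multivariate_normal.pdf(points, mean=mu, cov=cov)
        for (w, mu, cov) in zip(weights, means, covs)
    ], axis=1)  # shape (n, k)
    # Normalize per point: each row gives that point's weight in each cluster.
    return likelihoods / likelihoods.sum(axis=1, keepdims=True)
```

K-means' hard assignment is the degenerate version of this: take `argmax` over each row instead of normalizing it.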
-------------------------------------------------------------------------------- /ai/probablistic-robotics.txt: -------------------------------------------------------------------------------- 1 | * Robotics is the science of perceiving and manipulating the physical world 2 | through computer controlled devices. 3 | 4 | * Robot environments are unpredictable. Sensors have limited 5 | resolution, can fail (detecting failure is hard). Actuation 6 | -------------------------------------------------------------------------------- /ai/reinforcement-learning.md: -------------------------------------------------------------------------------- 1 | Sutton and Barto. (BTW, they recommend Bertsekas as another textbook 2 | if this one is too easy; I'm probably good reading just one RL book). 3 | 4 | # Ch1: Introduction 5 | 6 | Main components are *policy*, *reward function*, *value function*, and 7 | (optionally) a *model* of the environment. 8 | 9 | They emphasize the role of learning while interacting with the 10 | environment. This lets you learn as you gain new experience. It even 11 | lets you learn against a player that changes strategy slowly! 12 | 13 | They mention exploration vs exploitation. They mention temporal 14 | difference learning of value functions and also 15 | TD-Backgammon. They mention using other ML techniques in the value 16 | function. 17 | 18 | They mention that you can build knowledge of the domain into RL 19 | methods; presumably they will show how later. They mention partially 20 | observed games, but say they won't deeply cover those. 21 | 22 | They note that reinforcement learning is related to *optimal control*, 23 | which has been studied for over a century. 24 | 25 | They mention the *credit assignment problem*. 26 | 27 | # Ch2: Bandit Problems 28 | 29 | **I paused this book until later...** 30 | -------------------------------------------------------------------------------- /ai/support-vector-machines/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./support-vector-machines.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /ai/support-vector-machines/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./support-vector-machines.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /algos/dynamic-programming.md: -------------------------------------------------------------------------------- 1 | We know what a *divide-and-conquer* algorithm is. We break the problem 2 | into subproblems and solve those. Then we combine the answers from the 3 | subproblems. 4 | 5 | The subproblems sometimes *overlap*. In that case, if you solve a 6 | subproblem, you may wish to save its solution so that you can reuse it 7 | if you encounter this subproblem again. This is called *memoization*. 8 | 9 | Divide-and-conquer is not the only way to formulate subproblems. The 10 | Floyd-Warshall algorithm is a good example. Here we start with a table 11 | of shortest paths between pairs of vertices, with the restriction that 12 | we pass through no intermediate vertex. We then step-by-step relax 13 | this constraint, allowing ourselves to pass through more vertices. 14 | 15 | This is not a divide-and-conquer approach. We didn't try to solve the 16 | problem with half the vertices or half the edges. Breaking up a 17 | problem like that would be *top-down*. 18 | 19 | Instead, we started from the *bottom*: problems with no intermediate 20 | vertices. We then moved *up*, putting together these solutions to 21 | solve more relaxed problems. This is called a *bottom-up approach*. 22 | 23 | An advantage of this bottom-up approach is that it avoids a weakness of 24 | memoization, which can be wasteful in storing unnecessary solutions. In 25 | Floyd-Warshall we use exactly `O(n**2)` memory, which is optimal. 26 | 27 | The *dynamic* part of the name might be thought to come from starting 28 | with solutions to subproblems, and then, over *time*, producing new 29 | solutions to new problems, growing toward your answer. It's a tortured 30 | name (Bellman admits this), but there is some logic to it. 31 |
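A sketch of that bottom-up relaxation (my own illustration; it assumes an adjacency matrix `weights` holding `math.inf` for missing edges, and no negative cycles):

```python
import math

def floyd_warshall(weights):
    n = len(weights)
    # Base problems: paths allowed to use no intermediate vertices.
    dist = [row[:] for row in weights]
    for v in range(n):
        dist[v][v] = 0
    # Relax the constraint: paths may now also pass through vertex k.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

Note that the single `n x n` table is updated in place; no memo of earlier subproblem layers is kept around, which is exactly the `O(n**2)` memory claim.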
-------------------------------------------------------------------------------- /algos/linear-merge-sort-falacy.md: -------------------------------------------------------------------------------- 1 | A note about a recurring fallacy. Sometimes I hear students new to 2 | asymptotic analysis say merge sorting is `O(n)`. They reason 3 | inductively. Say that it takes `O(n)` time to sort half the 4 | array. Then: 5 | 6 | 1. Sort the left (`O(n)` time) 7 | 2. Sort the right (`O(n)` time) 8 | 3. Merge the two (`O(n)` time) 9 | 10 | So this proves that sorting `n` elements takes `O(n)` time! Right? 11 | 12 | Well, we know we need a base case. But what does that mean for 13 | asymptotics? 14 | 15 | What we need to show is that, for large enough `n`, there exists a `c` 16 | such that the sorting time is `<= c * n`. -------------------------------------------------------------------------------- /math/conjugate-gradient-descent.md: -------------------------------------------------------------------------------- ... 6 | `z^T M z > 0` 7 | 8 | What this means to me is that the matrix `M` maps every vector to a 9 | new vector `Mz` such that when `Mz` is projected onto `z`, this 10 | component along `z` is positive. That is: no `z` is mapped to an 11 | orthogonal vector, nor a vector "against" the original `z`. 12 | 13 | Semi-definite allows for mapping to orthogonal vectors. 14 | 15 | Not really sure what symmetry is going to give us yet... 16 | -------------------------------------------------------------------------------- /math/cos-of-sum.JPG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ruggeri/tech-notes/bd3981d766c26105c82bc55ea05fca0dacdb1b42/math/cos-of-sum.JPG -------------------------------------------------------------------------------- /math/diffie-hellman.md: -------------------------------------------------------------------------------- 1 | I think it sounds pretty easy. You choose a prime `p`. You choose a 2 | primitive root `g` (a generator of the multiplicative group mod `p`). 3 | 4 | Alice sends Bob `g**a mod p`. Bob sends Alice `g**b mod p`. Alice can 5 | take `g**b` and raise it to the `a`th power. And Bob can take `g**a` 6 | and raise it to the `b`th power. 7 | 8 | But if the discrete logarithm problem is hard, then finding `a, b` 9 | from `g**a` or `g**b` should be hard. 10 | 11 | If you can share the secret `g**ab`, then you can use it as an 12 | encryption key. DH is a good way to do perfect forward secrecy, since 13 | DH key generation is fast. RSA generation is presumably much slower 14 | (and each party needs to do it). 15 | 16 | You can use Diffie-Hellman to do public key encryption. Basically, 17 | Alice publishes `g**a, p` as her public key. Then, when Bob wants to 18 | send a message, they pick a `g**b`, and send that in addition to their 19 | message encrypted with `g**(ab)`. I believe this public version is 20 | called **ElGamal encryption**. 21 | 22 | But it's very interesting. This is still not _asymmetric_ encryption. 23 | Which means you can't do key signing with it. But I've explained in my 24 | ZKP notes that you can do signatures based on the discrete log problem 25 | using a non-interactive version of Schnorr identification. 26 |
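A toy run of the exchange (my own sketch with tiny, insecure numbers; real deployments use primes of 2048+ bits):

```python
import random

p = 23   # toy prime
g = 5    # a primitive root mod 23

a = random.randrange(2, p - 1)   # Alice's secret exponent
b = random.randrange(2, p - 1)   # Bob's secret exponent

A = pow(g, a, p)   # Alice sends this in the clear
B = pow(g, b, p)   # Bob sends this in the clear

# Each side raises the other's public value to its own secret.
alice_key = pow(B, a, p)   # (g**b)**a = g**(a*b) mod p
bob_key = pow(A, b, p)     # (g**a)**b = g**(a*b) mod p
assert alice_key == bob_key
```

An eavesdropper sees `p`, `g`, `A`, and `B`; recovering `a` or `b` from those is exactly the discrete logarithm problem mentioned above.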
-------------------------------------------------------------------------------- /math/direct-sum-vs-direct-product/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./direct-sum-vs-direct-product.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /math/direct-sum-vs-direct-product/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./direct-sum-vs-direct-product.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /math/div-grad-curl/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./div-grad-curl.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /math/div-grad-curl/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./div-grad-curl.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /math/eigenvector-decomposition-2/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./eigenvector-decomposition-2.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /math/eigenvector-decomposition-2/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./eigenvector-decomposition-2.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /math/eigenvector-decomposition-3/01-abstract.tex: -------------------------------------------------------------------------------- 1 | \section{abstract} 2 | 3 | This proof tries to correct my original ``proof.'' JRN pointed out the 4 | error in my proof to me and gave me an alternative one. His proof uses 5 | the extreme value theorem for a continuous function defined on a compact 6 | domain. His proof takes a slightly different approach than my original, 7 | though. This document is an attempt to import his idea (using the 8 | extreme value theorem) to correct my original version, while staying 9 | true to the intuition/idea I had been pursuing. 10 | -------------------------------------------------------------------------------- /math/eigenvector-decomposition-3/02-proposition.tex: -------------------------------------------------------------------------------- 1 | \section{Proposition} 2 | 3 | \begin{theorem}[Finite Dimensional Spectral Theorem] 4 | For any full-rank real symmetric matrix $\mtxA$, there exists an 5 | orthogonal (aka ``rotation'') matrix $\mtxQ$ and a diagonal (aka 6 | ``stretching'') matrix $\mtxD$ such that: 7 | 8 | \begin{nedqn} 9 | \mtxA 10 | & = & 11 | \mtxQDQt 12 | \end{nedqn} 13 | 14 | Equivalently: any full-rank real symmetric matrix $\mtxA$ with 15 | dimension $n$ possesses $n$ orthogonal (unit) eigenvectors $\setof{ 16 | \vecu[1], \ldots \vecu[n] }$. That is, for suitable $\lambda_i$: 17 | 18 | \begin{nedqn} 19 | \mtxA\vecu[i] 20 | & = & 21 | \lambda_i \vecu[i] 22 | \end{nedqn} 23 | 24 | Note that if we have $\mtxQ$, we can obtain a set $\vecu[i]$ by 25 | extracting the columns. Alternatively, given $\vecu[i]$ we can 26 | produce $\mtxQ$ by writing them in as columns. 27 | \end{theorem} 28 | -------------------------------------------------------------------------------- /math/eigenvector-decomposition-3/06-the-corrected-proof-optimization-goal.tex: -------------------------------------------------------------------------------- 1 | \section{The Corrected Proof: Optimization Goal} 2 | 3 | At last, let us begin. We will cast our search for a diagonal matrix 4 | $\mtxA' \defeq \mtxQtAQ$ as an optimization problem. Define: 5 | 6 | \begin{nedqn} 7 | f(\mtxA') 8 | & = & 9 | \sum_{i \ne j} \left(A'_{i, j}\right)^2 10 | \end{nedqn} 11 | 12 | That is, $f(\mtxA')$ is the sum of squares of off-diagonal elements. Of 13 | course, $f(\mtxA') = 0$ iff $\mtxA'$ is diagonal. So $f$ is a measure of 14 | ``badness'' that I want to minimize. To show that $\mtxA$ is always 15 | diagonalizable is to show that $f$ achieves a minimum of exactly zero on 16 | the space of rotated representations $\mtxA'$. 17 | 18 | Instead of writing $f$ as a function of $\mtxA'$, we could write $f$ as 19 | a function of $\mtxQ$. Or even less redundantly, as a function of 20 | $\frac{n(n-1)}{2}$ angles $\theta_{i, j}, \forall i \ne j$.
21 | 22 | This last view helps: the space of $\theta_{i, j}$ is really just points 23 | on the surface of a unit hypersphere, which is a compact set. (TODO: JRN 24 | informs me this intuition is not quite correct.) We can apply the 25 | extreme value theorem since $f$ is continuous. That is: $f$ achieves a 26 | minimum somewhere. 27 | 28 | Only one question remains: is $\min_{\mtxA'} f(\mtxA') = 0$? 29 | -------------------------------------------------------------------------------- /math/eigenvector-decomposition-3/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./eigenvector-decomposition-3.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /math/eigenvector-decomposition-3/eigenvector-decomposition-3.tex: -------------------------------------------------------------------------------- 1 | \documentclass[11pt, oneside]{amsart} 2 | 3 | \usepackage{geometry} 4 | \geometry{letterpaper} 5 | 6 | \usepackage{ned-common} 7 | \usepackage{ned-calculus} 8 | \usepackage{ned-linear-algebra} 9 | 10 | \begin{document} 11 | 12 | \title{Every Real Symmetric Matrix Is Diagonalizable} 13 | \maketitle 14 | 15 | \input{01-abstract} 16 | \input{02-proposition} 17 | \input{03-original-attempt} 18 | \input{04-contrast-with-gaussian-elimination} 19 | \input{05-exact-algorithm-for-eigenvectors} 20 | \input{06-the-corrected-proof-optimization-goal} 21 | \input{07-how-rotations-change-a} 22 | \input{08-first-order-conditions} 23 | \input{09-calculation-of-derivative-for-cross-terms} 24 | \input{10-summary-thus-far} 25 | \input{11-second-order-conditions} 26 | \input{12-conclusion} 27 | 28 | \end{document} 29 | -------------------------------------------------------------------------------- /math/eigenvector-decomposition-3/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./eigenvector-decomposition-3.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /math/eigenvector-decomposition-intuition/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./eigenvector-decomposition-intuition.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /math/eigenvector-decomposition-intuition/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./eigenvector-decomposition-intuition.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /math/fourier-tex/02-purpose.tex: -------------------------------------------------------------------------------- 1 | \section{Purpose} 2 | 3 | Many phenomena can be described using sinusoids. For instance, when a 4 | note on a guitar is plucked, the pressure on your eardrum due to the 5 | vibration of the string varies through time proportionally to a 6 | sinusoidal function of a given frequency. For example, the ``concert 7 | A'' pitch is one which vibrates at a frequency of 440 times per second. 8 | 9 | That means that when a violinist plays an open A string, the sound 10 | causes an oscillating change in pressure on your eardrum. 440 11 | oscillations happen every second. 12 | 13 | How great is the peak change in pressure? This is called the 14 | \define{amplitude}, which we will denote $a$. The magnitude of $a$ will 15 | correspond to the \emph{volume} you hear. The change in pressure on your 16 | eardrum from a concert A is thus $a \sin\parens{2\pi \cdot 440 \cdot 17 | t}$. 18 | 19 | What if \emph{multiple} strings are plucked simultaneously? Say each of 20 | $k$ strings has frequency $f_i$ and is plucked with amplitude $a_i$. 21 | Then the \emph{aggregate} change in your eardrum pressure is denoted 22 | $f$. And we know: 23 | 24 | \begin{nedqn} 25 | f(t) 26 | \eqcol 27 | \sum_{i = 1}^k 28 | a_i \sin\parens{2\pi f_i t} 29 | \end{nedqn} 30 | 31 | Experimentally, we could measure $f$ with a pressure sensor, and record 32 | our readings. But from the measurements of $f$, how could we find out 33 | which strings were plucked (and at what amplitude)? Basically, we want 34 | to \emph{decompose} $f$ into the constituent sinusoidal waves, but how 35 | do we do that? The answer is the \define{Fourier transform}. 36 |
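Numerically, this decomposition question is answered with the discrete Fourier transform. A quick sketch (my own example; the three frequencies and amplitudes are made up):

```python
import numpy as np

rate = 8192                    # samples per second
t = np.arange(rate) / rate     # one second of measurements
f = (1.00 * np.sin(2 * np.pi * 440 * t)    # "strings" at 440, 554, 659 Hz
     + 0.50 * np.sin(2 * np.pi * 554 * t)
     + 0.25 * np.sin(2 * np.pi * 659 * t))

spectrum = np.fft.rfft(f)
freqs = np.fft.rfftfreq(len(f), d=1 / rate)
amps = 2 * np.abs(spectrum) / len(f)   # rescale bins back to amplitudes a_i
for idx in np.argsort(amps)[-3:][::-1]:
    print(f"{freqs[idx]:.0f} Hz, amplitude {amps[idx]:.2f}")
# 440 Hz, amplitude 1.00; 554 Hz, amplitude 0.50; 659 Hz, amplitude 0.25
```

The peaks of the spectrum recover exactly which strings were plucked and at what amplitude.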
-------------------------------------------------------------------------------- /math/fourier-tex/10-mass-vs-density.tex: -------------------------------------------------------------------------------- 1 | \section{Mass vs Density} 2 | 3 | As we've just been saying, all the basis vectors have infinite $L^2$ 4 | norm. Thus all functions that are spanned by the Fourier basis must also 5 | have infinite $L^2$ norm. 6 | 7 | But what about aperiodic functions with \emph{finite} norm? It's obvious 8 | that using our current version of the Fourier transform, we would assign 9 | zero weight to every frequency $\omega$. This follows because our inner 10 | product has a denominator that is ever increasing, versus a numerator 11 | that has a finite upper bound. 12 | 13 | Up to now, we've really been talking about \emph{mass}: how much mass to 14 | assign each frequency. Instead, we could talk about \emph{density}. We 15 | could say that indeed there is zero mass contributed by any one 16 | frequency $\omega$. But we could also say that \emph{intervals} of 17 | frequency can contribute mass in aggregate. Here we're talking about 18 | density. 19 | 20 | Here's how we can use the density: 21 | 22 | \begin{nedqn} 23 | f(t) 24 | \eqcol 25 | \int_{-\infty}^{\infty} 26 | \hat{f}(\omega) 27 | \expf{i\omega t} 28 | \domega 29 | \end{nedqn} 30 | 31 | We could speak even more generally still. We could do a Lebesgue 32 | integration relative to any measure on the real space of $\omega$ 33 | values. That would allow us to put mass in some places (via Dirac 34 | deltas), but have density elsewhere.
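As a sanity check of this density formulation (my own example, written in plain LaTeX rather than the notes' macros): a unit Dirac mass at a single frequency recovers a pure complex sinusoid, by the sifting property.

```latex
\hat{f}(\omega) = \delta(\omega - \omega_0)
\quad\Longrightarrow\quad
f(t) = \int_{-\infty}^{\infty} \delta(\omega - \omega_0)\,
       e^{i\omega t}\, d\omega = e^{i\omega_0 t}
```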
35 | -------------------------------------------------------------------------------- /math/fourier-tex/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./fourier.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /math/fourier-tex/fourier.tex: -------------------------------------------------------------------------------- 1 | \documentclass[11pt, oneside]{amsart} 2 | 3 | \usepackage{geometry} 4 | \geometry{letterpaper} 5 | 6 | \usepackage{ned-common} 7 | \usepackage{ned-calculus} 8 | \usepackage{ned-linear-algebra} 9 | \usepackage{ned-stats} 10 | 11 | \begin{document} 12 | 13 | \title{The Fourier Transform} 14 | \maketitle 15 | 16 | \input{01-definitions} 17 | \input{02-purpose} 18 | \input{03-linear-algebra-review} 19 | \input{04-relationship-to-linear-algebra} 20 | \input{05-our-inner-product} 21 | \input{06-adding-cosines-in} 22 | \input{07-complex-sinusoidals} 23 | \input{08-topology} 24 | \input{09-aperiodic-functions} 25 | \input{10-mass-vs-density} 26 | \input{11-fourier-densities} 27 | 28 | % * https://see.stanford.edu/materials/lsoftaee261/book-fall-07.pdf 29 | % * I also really liked "Fourier Analysis" by Gerald B. Folland. 30 | 31 | % In Folland, there is a very readable treatment of when the Fourier 32 | % series and Fourier transforms exist and converge back to the original 33 | % function. It involves a bunch of Lebesgue integration theorems (some 34 | % of which I wrote up in the calculus folder). 35 | 36 | % http://math.mit.edu/~gs/cse/websections/cse41.pdf 37 | 38 | % I think there is more that I want to say about topology... 39 | 40 | \end{document} 41 | -------------------------------------------------------------------------------- /math/fourier-tex/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./fourier.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /math/hesse-form-of-line.md: -------------------------------------------------------------------------------- 1 | Any line can be described as: 2 | 3 | r \cdot v = d 4 | 5 | What does this mean? Consider a plane in three-dimensional space. The 6 | contours of the plane are lines perpendicular to the 7 | gradient. Therefore, any line can be expressed as the points 8 | perpendicular to a gradient (which is `r` in this case). If `d=0` then 9 | this line passes through the origin, otherwise we push it out by 10 | `d/norm(r)`. 11 | 12 | This immediately shows that what matters is the ratio 13 | `d/norm(r)`. Therefore, let us assume `norm(r)=1`. In that case, `r = 14 | (cos theta, sin theta)` for some theta. This gives the form: 15 | 16 | v_1 cos(theta) + v_2 sin(theta) = d 17 | 18 | This form of the line is called *Hesse normal form*. 19 | 20 | We now consider all lines passing through a given point `(v_1, v_2)`, that 21 | is, the set of points `(theta, d)` where the above equality is 22 | satisfied.
Let's think of `d` as a function of `theta`. Then `d` is 23 | equal to 24 | 25 | v_1 cos(theta) + v_2 sin(theta) 26 | 27 | This is a linear combination of `cos` and `sin`. Elsewhere in my math 28 | notes, I show that this is also a sinusoidal function. In fact, it has 29 | period `2pi`. 30 | -------------------------------------------------------------------------------- /math/kalman-filter/02-definition-of-normal.tex: -------------------------------------------------------------------------------- 1 | \section{Definition of Normal Distribution} 2 | 3 | The \define{normal distribution} with mean $\mu$ and variance $\var$ is 4 | defined as: 5 | 6 | \begin{nedqn} 7 | \nnormal 8 | & \defeq & 9 | \nnormaleq 10 | \\ 11 | % 12 | \shortintertext{And the \define{standard normal} is simply:} 13 | % 14 | \snormal 15 | & \defeq & 16 | \snormaleq 17 | \end{nedqn} 18 | 19 | We ought to show this alleged probability distribution integrates to 1. 20 | Let's do a change of variables $x \mapsto \parens{\sqrt{2}\sigma y + 21 | \mu}$: 22 | 23 | \begin{nedqn} 24 | \intR 25 | \nnormaleq 26 | & = & 27 | \sqrttwo\sigma 28 | % 29 | \intR 30 | \nnormalc 31 | % 32 | \nexpf{ 33 | \Bparens{ 34 | \parens{\sqrt{2}\sigma y + \mu} 35 | - 36 | \mu 37 | }^2 38 | }{2\var} 39 | % 40 | \dy 41 | \\ 42 | & = & 43 | \invsqrtpi 44 | \intR 45 | \gaussianexp[y] 46 | \dy 47 | \\ 48 | & = & 49 | \invsqrtpi 50 | \sqrtpi 51 | \nedcomment{Gaussian integral} 52 | \\ 53 | & = & 54 | 1 55 | \end{nedqn} 56 | 57 | Notice the factor of $\sqrttwo\sigma$ that got introduced when we did the 58 | change of variables. That's because $\dx = \sqrttwo\sigma \dy$. 59 | 60 | Anyway, this shows that $\nnormal$ is a proper probability density 61 | function. 62 |
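A quick numerical spot check of that normalization (my own sketch, using scipy quadrature; any mean and variance should work):

```python
import numpy as np
from scipy.integrate import quad

def normal_pdf(x, mu, var):
    # The density defined above, written out explicitly.
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

mass, _err = quad(normal_pdf, -np.inf, np.inf, args=(1.5, 4.0))
print(mass)  # ~1.0, as the change-of-variables argument predicts
```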
-------------------------------------------------------------------------------- /math/kalman-filter/06-mle-estimate-for-normal-variance.tex: -------------------------------------------------------------------------------- 1 | \section{Maximum Likelihood Estimate for $\var$} 2 | -------------------------------------------------------------------------------- /math/kalman-filter/07-the-posterior-distribution-of-mu.tex: -------------------------------------------------------------------------------- 1 | \section{The Posterior Distribution of $\mu$} 2 | 3 | \input{07a-definitions.tex} 4 | \input{07b-choosing-our-prior.tex} 5 | \input{07c-first-exercise.tex} 6 | \input{07d-second-exercise.tex} 7 | \input{07e-calculation-of-mumuxp.tex} 8 | \input{07f-calculation-of-varmuxp.tex} 9 | \input{07g-posterior-wrapup.tex} 10 | -------------------------------------------------------------------------------- /math/kalman-filter/07a-definitions.tex: -------------------------------------------------------------------------------- 1 | \subsection{Definitions} 2 | 3 | We have found the maximum likelihood estimate for $\mux$, but how likely 4 | is it? Are there other values for $\mux$ that are nearly as good, but 5 | far away from the MLE estimate? 6 | 7 | Let's explore the \define{posterior} distribution on $\mux$ given a 8 | sample $x$. That is, we want to know: 9 | 10 | \begin{nedqn} 11 | \cpmuxx 12 | \end{nedqn} 13 | 14 | This only makes sense if there is a \emph{joint} probability 15 | distribution over $\mux$ and $x$. We already know $\cpxmux$. What we are 16 | missing is the unconditional \define{prior} distribution $\pmux$. Once 17 | we have $\pmux$, we can then use Bayes' rule: 18 | 19 | \begin{nedqn} 20 | \pr{\mux, x} 21 | & = & 22 | \pr{\mux, x} 23 | \\ 24 | \px 25 | \cpmuxx 26 | & = & 27 | \pmux 28 | \cpxmux 29 | \\ 30 | \cpmuxx 31 | & = & 32 | \frac{ 33 | \pmux \cpxmux 34 | }{ 35 | \px 36 | } 37 | \end{nedqn} 38 | 39 | It is most convenient if the posterior distribution is of the same 40 | family as the prior distribution. We're going to work out a technique to 41 | update $\pmux$ to $\cpmuxx$ given a single datapoint. If the posterior 42 | $\cpmuxx$ has the same ``shape'' as the prior $\pmux$, then our 43 | technique can be simply re-applied if we observe a second datapoint and 44 | want to \emph{update our priors} a second time. 45 | 46 | When $\cpmuxx$ has the same shape as $\pmux$, we say that $\pmux$ is a 47 | \define{conjugate prior} for the conditional distribution $\cpxmux$. 48 | -------------------------------------------------------------------------------- /math/kalman-filter/07c-first-exercise.tex: -------------------------------------------------------------------------------- 1 | \subsection{First Exercise} 2 | 3 | Let's get away from all these symbols and just try to simplify something 4 | easy: 5 | 6 | \begin{nedqn} 7 | \parensq{x - b_1} + \parensq{x - b_2} 8 | \end{nedqn} 9 | 10 | It's intuitive that this function is minimized at $x = \otwo{b_1 + 11 | b_2}$, because then: 12 | 13 | \begin{nedqn} 14 | x - b_1 15 | & = & 16 | \otwo{b_1 + b_2} 17 | - 18 | b_1 19 | = 20 | \otwo{b_2 - b_1} 21 | \\ 22 | x - b_2 23 | & = & 24 | \otwo{b_1 + b_2} 25 | - 26 | b_2 27 | = 28 | \otwo{b_1 - b_2} 29 | \end{nedqn} 30 | 31 | These have the same magnitude, so the competing objectives to minimize 32 | $\parensq{x - b_1}$ and $\parensq{x - b_2}$ are balanced. Of course, you 33 | may verify this yourself by taking the derivative. 34 | 35 | Once we have found the minimum, we know that 36 | 37 | \begin{nedqn} 38 | \parensq{x - b_1} 39 | + 40 | \parensq{x - b_2} 41 | & = & 42 | C 43 | \parensq{x - \otwo{b_1 + b_2}} 44 | + 45 | D 46 | \end{nedqn} 47 | 48 | \noindent 49 | for constants $C, D$. We know that $C = 2$, because the two $x^2$ terms in 50 | the original add together. What is the $D$ term? It is the ``error'' when we 51 | choose the best value for $x$: $\otwo{b_1 + b_2}$. So of course it 52 | makes sense that this is $D = 2\parensq{\otwo{b_2 - b_1}}$. That's 53 | the sum of the ``errors'' from each side, and the errors are both 54 | $\parensq{\otwo{b_2 - b_1}}$. 55 | -------------------------------------------------------------------------------- /math/kalman-filter/07d-second-exercise.tex: -------------------------------------------------------------------------------- 1 | \subsection{Second Exercise} 2 | 3 | Let's do one better. Let's try to simplify: 4 | 5 | \begin{nedqn} 6 | a_1^2 \parensq{x - b_1} 7 | + 8 | a_2^2 \parensq{x - b_2} 9 | & = & 10 | \parensq{a_1 x - a_1 b_1} 11 | + 12 | \parensq{a_2 x - a_2 b_2} 13 | \end{nedqn} 14 | 15 | We can see what is happening here. Each error function has its own 16 | ``stretched'' sense of distance.
To achieve balance we still need: 17 | 18 | \begin{nedqn} 19 | 2 20 | \parens{a_1 x - a_1b_1} 21 | a_1 22 | & = & 23 | -2 24 | \parens{a_2 x - a_2 b_2} 25 | a_2 26 | \end{nedqn} 27 | 28 | We can see that we want to take a weighted average to achieve the 29 | balance and minimize the error: 30 | 31 | \begin{nedqn} 32 | x 33 | & = & 34 | \frac{ 35 | a_1^2 b_1 + a_2^2 b_2 36 | }{ 37 | a_1^2 + a_2^2 38 | } 39 | \nednumber\label{weightedsum} 40 | \end{nedqn} 41 | 42 | \noindent 43 | Let us call this $b'$. 44 | 45 | As we know from last time, we have 46 | 47 | \begin{nedqn} 48 | a_1^2 \parensq{x - b_1} 49 | + 50 | a_2^2 \parensq{x - b_2} 51 | & = & 52 | C 53 | \parensq{x - b'} 54 | + 55 | D 56 | \end{nedqn} 57 | 58 | Just as before, the constant is an ``error'' term. We can ignore it. 59 | 60 | We should look at $C$. As before, it should equal $a_1^2 + a_2^2$ so 61 | that the leading coefficient of the $x^2$ term is correct. 62 | -------------------------------------------------------------------------------- /math/kalman-filter/07f-calculation-of-varmuxp.tex: -------------------------------------------------------------------------------- 1 | \subsection{Calculation of $\varmuxp$} 2 | 3 | From before, we know that $C = a_1^2 + a_2^2$. Above, when I extracted a 4 | constant of $\half$ from the exponent, I ensured that $C = \rhomuxp$. 5 | 6 | That means: 7 | 8 | \begin{nedqn} 9 | \rhomuxp 10 | & = & 11 | \rhomux 12 | + 13 | \rhox 14 | \end{nedqn} 15 | 16 | That's pretty cool. It says that the precision increases by $\rhox$. 17 | That makes sense: the more ``precise'' we think samples of $x$ are, the 18 | more information they are telling us. One interesting fact: no matter 19 | how unexpected our sample $x$ is, our posterior precision will always 20 | monotonically increase, and by the same amount. 21 | 22 | We may put this in variance terms: 23 | 24 | \begin{nedqn} 25 | \rhomuxp 26 | & = & 27 | \rhomux 28 | + 29 | \rhox 30 | \\ 31 | \invf{\varmuxp} 32 | & = & 33 | \invf{\varmux} 34 | + 35 | \invf{\varx} 36 | \\ 37 | \varmux 38 | \varx 39 | & = & 40 | \varmuxp \varx 41 | + 42 | \varmuxp \varmux 43 | \\ 44 | \varmuxp 45 | & = & 46 | \frac{ 47 | \varmux 48 | \varx 49 | }{ 50 | \varmux + \varx 51 | } 52 | \end{nedqn} 53 | 54 | It is hard for me to give a very clear interpretation of this formula. 55 | But here it is\ldots 56 |
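A small numeric illustration of these update rules (my own example): start with a prior on $\mu$ of mean 0 and variance 4, then observe $x = 2$ with variance 1.

```python
mu_prior, var_prior = 0.0, 4.0   # prior belief about mu
x, var_x = 2.0, 1.0              # one observation

rho_post = 1 / var_prior + 1 / var_x    # precisions add: 1.25
var_post = 1 / rho_post                 # 0.8 = (4 * 1) / (4 + 1)
mu_post = var_post * (mu_prior / var_prior + x / var_x)   # 1.6
```

The posterior mean 1.6 is the precision-weighted average from the exercise above, and the posterior variance 0.8 matches the product-over-sum formula.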
The state evolves in the 9 | following simple way: 10 | 11 | \begin{nedqn} 12 | \vz_t 13 | \eqcol 14 | \mA \vz_{t-1} 15 | + 16 | \vepsilon\subvz 17 | \\ 18 | \intertext{where} 19 | \vepsilon\subvz 20 | \simcol 21 | \normal{\veczero}{\mQ} 22 | \end{nedqn} 23 | 24 | Our insight into the unobserved state is via $\vy_t$. The observation 25 | model is: 26 | 27 | \begin{nedqn} 28 | \vy_t 29 | \eqcol 30 | \mC \vz_t 31 | + 32 | \vepsilon\subvy 33 | \intertext{where} 34 | \vepsilon\subvy 35 | \simcol 36 | \normal{\veczero}{\mR} 37 | \end{nedqn} 38 | 39 | -------------------------------------------------------------------------------- /math/kalman-filter/10b-prediction.tex: -------------------------------------------------------------------------------- 1 | \subsection{Prediction} 2 | 3 | Our goal will eventually be to calculate $\condpr{\vz_t}{\vy_{1:t}}$. 4 | Before we try to fit a posterior, let's calculate the ``prior'' 5 | distribution $\condpr{\vz_t}{\vy_{1:t-1}}$. 6 | 7 | We will work recursively. Thus we assume that we already have available: 8 | 9 | \begin{nedqn} 10 | \condpr{\vz_{t-1}}{\vy_{1:t-1}} 11 | \eqcol 12 | \normal{\vmu_{t-1}}{\covmtx_{t-1}} 13 | \end{nedqn} 14 | 15 | Likewise, the definition of the transition model is: 16 | 17 | \begin{nedqn} 18 | \condpr{\vz_t}{\vz_{t-1}} 19 | \simcol 20 | \normal{\mA \vz_{t-1}}{\mQ}. 21 | \end{nedqn} 22 | 23 | We now combine the two, marginalizing out $\vz_{t-1}$ (informally, I write the marginalization as a product of the two linear-Gaussian factors): 24 | 25 | \begin{nedqn} 26 | \condpr{\vz_t}{\vy_{1:t-1}} 27 | \eqcol 28 | \condpr{\vz_{t-1}}{\vy_{1:t-1}} 29 | \condpr{\vz_t}{\vz_{t-1}} 30 | \\ 31 | \simcol 32 | \normal{\vmu_{t-1}}{\covmtx_{t-1}} 33 | \normal{ 34 | \mA \vmu_{t-1} 35 | }{\mQ} 36 | \\ 37 | \eqcol 38 | \normal{ 39 | \mA 40 | \vmu_{t-1} 41 | }{ 42 | \mA 43 | \covmtx_{t-1} 44 | \mAt 45 | + 46 | \mQ 47 | } 48 | \\ 49 | \eqcol 50 | \normal{ 51 | \vmu_{t | t-1} 52 | }{ 53 | \covmtx_{t | t-1} 54 | } 55 | \nednumber\label{zt:cond:ztm1} 56 | \intertext{where} 57 | \\ 58 | \vmu_{t | t - 1} 59 | \eqcol 60 | \mA 61 | \vmu_{t-1} 62 | \\ 63 | \covmtx_{t | t - 1} 64 | & \defeq & 65 | \mA 66 | \covmtx_{t-1} 67 | \mAt 68 | + 69 | \mQ 70 | \end{nedqn} 71 | 72 | Those are our prior predicted mean and covariance for the distribution 73 | of $\condpr{\vz_t}{\vy_{1:t-1}}$. 74 | -------------------------------------------------------------------------------- /math/kalman-filter/10c-covariance-update.tex: -------------------------------------------------------------------------------- 1 | \subsection{Covariance Update} 2 | 3 | We now want to incorporate a new observation: $\vy_t$. We want to 4 | calculate: 5 | 6 | \begin{nedqn} 7 | \condpr{\vz_t}{\vy\seqt} 8 | \simcol 9 | \condpr{\vz_{t - 1}}{\vy_{1:t-1}} 10 | \condpr{\vz_t}{\vz_{t-1}} 11 | \condpr{\vy_t}{\vz_t} 12 | \end{nedqn} 13 | 14 | Great! We've already analyzed the first parts above, so we just need to 15 | attack $\condpr{\vy_t}{\vz_t}$. We can almost apply our MVN posterior 16 | calculation technique, except the conditional distribution here is over 17 | $Y$ space rather than $Z$ space.
But that's easy to fix: 18 | 19 | \begin{nedqn} 20 | \condpr{ 21 | \vy_t 22 | }{ 23 | \vz_t 24 | } 25 | \simcol 26 | \normal{\mC \vz_t}{\mR} 27 | \\ 28 | \condpr{ 29 | \mC\invx 30 | \vy_t 31 | }{ 32 | \vz_t 33 | } 34 | \simcol 35 | \normal{\vz_t}{\mC\invx \mR \parenstrans{\mC\invx}} 36 | \nednumber\label{yt:cond:zt} 37 | \end{nedqn} 38 | 39 | Combining equations \ref{yt:cond:zt} and \ref{zt:cond:ztm1}, we can 40 | start our normal calculations: 41 | 42 | \begin{nedqn} 43 | \covmtx_t\invx 44 | \eqcol 45 | \covmtx_{t | t-1}\invx 46 | + 47 | \parensinv{ 48 | \mCinv 49 | \mR 50 | \parenstrans{\mCinv} 51 | } 52 | \\ 53 | \eqcol 54 | \covmtx_{t | t-1}\invx 55 | + 56 | \mCt 57 | \mRinv 58 | \mC 59 | \end{nedqn} 60 | -------------------------------------------------------------------------------- /math/kalman-filter/10d-innovation-format.tex: -------------------------------------------------------------------------------- 1 | \subsection{Innovation Update} 2 | 3 | Just a quick note. If we are calculating an update for $\vmu$: 4 | 5 | \begin{nedqn} 6 | \vmu_t 7 | \eqcol 8 | \parensinv{\mXinv + \mYinv} 9 | \parens{ 10 | \mXinv 11 | \vmu_{t-1} 12 | + 13 | \mYinv 14 | \vx 15 | } 16 | \\ 17 | \intertext{then note} 18 | \coleq 19 | \parensinv{\mXinv + \mYinv} 20 | \parens{ 21 | \mXinv 22 | \vmu_{t-1} 23 | + 24 | \mYinv 25 | \parens{ 26 | \vmu_{t-1} 27 | + 28 | \Delta\vx 29 | } 30 | } 31 | \\ 32 | \coleq 33 | \parensinv{\mXinv + \mYinv} 34 | \parens{ 35 | \parens{ 36 | \mXinv 37 | + 38 | \mYinv 39 | } 40 | \vmu_{t-1} 41 | + 42 | \mYinv 43 | \Delta\vx 44 | } 45 | \\ 46 | \coleq 47 | \vmu_{t-1} 48 | + 49 | \parensinv{\mXinv + \mYinv} 50 | \mYinv 51 | \Delta\vx 52 | \\ 53 | \Delta\vmu 54 | \coleq 55 | \parensinv{\mXinv + \mYinv} 56 | \mYinv 57 | \Delta\vx 58 | \end{nedqn} 59 | 60 | We say that $\Delta\vx$ is the \define{innovation}. It's the difference 61 | from the prior predicted mean. This format makes the update 62 | ``incremental.'' 63 | -------------------------------------------------------------------------------- /math/kalman-filter/10e-mean-update.tex: -------------------------------------------------------------------------------- 1 | \subsection{Mean Calculation} 2 | 3 | Using this innovation format, we may write: 4 | 5 | \begin{nedqn} 6 | \vmu_t - \vmu_{t-1} 7 | \eqcol 8 | \parensinv{ 9 | \covmtx_{t | t-1}\invx 10 | + 11 | \mCt 12 | \mRinv 13 | \mC 14 | } 15 | \parens{ 16 | \mCt 17 | \mRinv 18 | \mC 19 | } 20 | \parens{ 21 | \mCinv 22 | \Delta\vy_t 23 | } 24 | \end{nedqn} 25 | 26 | Notice the use of $\mCinv \Delta\vy_t$ because we need to transform $Y$ 27 | space back to $Z$ space. 28 | 29 | \begin{nedqn} 30 | \vmu_t - \vmu_{t-1} 31 | \eqcol 32 | \parensinv{ 33 | \covmtx_{t | t-1}\invx 34 | + 35 | \mCt 36 | \mRinv 37 | \mC 38 | } 39 | \parens{ 40 | \mCt 41 | \mRinv 42 | \mC 43 | } 44 | \Bparens{ 45 | \mCinv 46 | \Delta\vy_t 47 | } 48 | \\ 49 | \eqcol 50 | \parensinv{ 51 | \covmtx_{t | t-1}\invx 52 | + 53 | \mCt 54 | \mRinv 55 | \mC 56 | } 57 | \Bparensinv{ 58 | \mR 59 | \parensinv{\mCt} 60 | } 61 | \Bbparens{ 62 | \Delta\vy_t 63 | } 64 | \\ 65 | \eqcol 66 | \parensinv{ 67 | \mR 68 | \parensinv{\mCt} 69 | \covmtx_{t | t-1}\invx 70 | + 71 | \mC 72 | } 73 | \Bbparens{ 74 | \Delta\vy_t 75 | } 76 | \\ 77 | \mK 78 | \defeqcol 79 | \parensinv{ 80 | \mR 81 | \parensinv{\mCt} 82 | \covmtx_{t | t-1}\invx 83 | + 84 | \mC 85 | } 86 | \end{nedqn} 87 | 88 | We call this the \define{Kalman gain matrix} and there are lots of 89 | equivalent definitions. 90 | 91 | And there you have it! 
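As a sanity check (my own addition, not part of the derivation), here is a minimal Python sketch of one predict/update cycle. It uses the gain in the standard equivalent form $\mK = \covmtx_{t|t-1} \mCt \parensinv{\mC \covmtx_{t|t-1} \mCt + \mR}$, one of the equivalent definitions alluded to above, which avoids assuming $\mC$ is invertible. All names and dimensions are illustrative only.

\begin{verbatim}
import numpy as np

def kalman_step(mu, Sigma, y, A, Q, C, R):
    # Predict: mu_{t|t-1} = A mu_{t-1}; Sigma_{t|t-1} = A Sigma A^T + Q.
    mu_pred = A @ mu
    Sigma_pred = A @ Sigma @ A.T + Q
    # Innovation: observation minus the prior predicted observation.
    delta_y = y - C @ mu_pred
    # Kalman gain, in the standard (equivalent) form.
    K = Sigma_pred @ C.T @ np.linalg.inv(C @ Sigma_pred @ C.T + R)
    # Update mean (incremental, "innovation" format) and covariance.
    mu_new = mu_pred + K @ delta_y
    Sigma_new = Sigma_pred - K @ C @ Sigma_pred
    return mu_new, Sigma_new
\end{verbatim}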
92 | -------------------------------------------------------------------------------- /math/kalman-filter/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./kalman-filter.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /math/kalman-filter/kalman-filter.tex: -------------------------------------------------------------------------------- 1 | \documentclass[11pt, oneside]{amsart} 2 | 3 | \usepackage{geometry} 4 | \geometry{letterpaper} 5 | 6 | \usepackage{ned-common} 7 | \usepackage{ned-calculus} 8 | \usepackage{ned-linear-algebra} 9 | \usepackage{ned-stats} 10 | 11 | \begin{document} 12 | 13 | \title{The Normal Distribution} 14 | \maketitle 15 | 16 | \input{01-the-gaussian-integral} 17 | \clearpage 18 | \input{02-definition-of-normal} 19 | \clearpage 20 | \input{03-mean-of-normal} 21 | \clearpage 22 | \input{04-variance-of-normal} 23 | \clearpage 24 | \input{05-mle-estimate-for-normal-mean} 25 | \clearpage 26 | \input{06-mle-estimate-for-normal-variance} 27 | \clearpage 28 | \input{07-the-posterior-distribution-of-mu} 29 | \clearpage 30 | \input{08-the-multivariate-normal-distribution} 31 | \clearpage 32 | \input{09-mvn-posterior} 33 | \clearpage 34 | \input{10-kalman-filter.tex} 35 | 36 | \end{document} 37 | -------------------------------------------------------------------------------- /math/kalman-filter/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./kalman-filter.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /math/lagrange/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./lagrange.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /math/lagrange/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./lagrange.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /math/ordinary-differential-equations/clean-latex: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | # Clean all latex tmp files. 4 | gfind . 
-regex ".*\\.\\(aux\|fdb_latexmk\|fls\|log\|gz\|out\\)$" | parallel rm 5 | -------------------------------------------------------------------------------- /math/ordinary-differential-equations/make-latex: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | TECH_NOTES_DIR="/Users/ruggeri/repos/ruggeri/tech-notes" 4 | 5 | FILES=" 6 | ordinary-differential-equations.tex 7 | " 8 | 9 | CMD="\ 10 | TEXINPUTS=\"$TECH_NOTES_DIR/math/stys:{//}:\" \ 11 | max_print_line=100000 \ 12 | pdflatex \ 13 | --file-line-error \ 14 | --synctex=1 \ 15 | --halt-on-error \ 16 | --output-directory {//} \ 17 | {.}\ 18 | " 19 | 20 | echo $FILES | \ 21 | parallel -P 1 $CMD | \ 22 | grep -v "texlive" | \ 23 | grep --context=5 "^l" 24 | 25 | ./clean-latex 26 | -------------------------------------------------------------------------------- /math/quaternions/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./quaternions.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /math/quaternions/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./quaternions.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /math/riemann-hypothesis/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./riemann-hypothesis.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /math/riemann-hypothesis/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./riemann-hypothesis.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /math/rsa/03-fermats-little-theorem.tex: -------------------------------------------------------------------------------- 1 | \section{Fermat's Little Theorem} 2 | 3 | \begin{theorem}[Fermat's Little Theorem] 4 | For any $a \ne 0$: 5 | 6 | \begin{nedqn} 7 | a^p \equiv a \pmod{p} 8 | \end{nedqn} 9 | 10 | Equivalently: 11 | 12 | \begin{nedqn} 13 | a^{p-1} \equiv 1 \pmod{p} 14 | \end{nedqn} 15 | \end{theorem} 16 | 17 | \begin{proof} 18 | Fermat's little theorem is simply a special case of Lagrange's 19 | theorem. 20 | 21 | As mentioned in the last section, we already showed that the series 22 | $1, a, a^2, \ldots$ generates a subgroup of $\Zmodpx$ (the 23 | multiplicative subgroup of $\Zmodp$). 
24 | 25 | By applying Lagrange's theorem, we showed that the order of the 26 | generated subgroup divides $p - 1$ (the size of the multiplicative 27 | subgroup). 28 | 29 | But then, since $k \order{a} = p - 1$ for some integer $k$, we have: 30 | 31 | \begin{nedqn} 32 | a^{p - 1} 33 | \equivcol 34 | a^{k\order{a}} 35 | \\ 36 | \equivcol 37 | \parens{ 38 | a^{\order{a}} 39 | }^k 40 | \\ 41 | \equivcol 42 | 1^k 43 | \\ 44 | \equivcol 45 | 1 \pmod{p} 46 | \end{nedqn} 47 | 48 | Again, we note that the order of $a$ may well be less than $p - 1$ (in 49 | which case $k > 1$). Regardless, we will still have that $a^{p - 1} = 50 | 1$. 51 | \end{proof} 52 | -------------------------------------------------------------------------------- /math/rsa/05-eulers-totient-theorem.tex: -------------------------------------------------------------------------------- 1 | \section{Euler's Totient Theorem} 2 | 3 | \begin{definition} 4 | We define \define{Euler's totient function} $\vphif{n}$ to be the 5 | number of positive integers less than $n$ that are relatively prime to $n$. Note 6 | that of course $\vphif{p} = p - 1$. 7 | \end{definition} 8 | 9 | \begin{proposition} 10 | $\Zmodx{n}$ consists of exactly $\vphif{n}$ integers. 11 | \end{proposition} 12 | 13 | \begin{proof} 14 | We already proved that $\Zmodx{n}$ consists of exactly those numbers 15 | coprime with $n$. By definition there are exactly $\vphif{n}$ of 16 | these. 17 | \end{proof} 18 | 19 | \begin{theorem}[Euler's Totient Theorem] For any $x$ that is relatively 20 | prime to $n$, we have: 21 | 22 | \begin{nedqn} 23 | x^{\vphif{n}} 24 | \equivcol 25 | 1 \pmod{n} 26 | \end{nedqn} 27 | \end{theorem} 28 | 29 | \begin{remark} 30 | Note that Euler's totient theorem generalizes Fermat's little theorem. 31 | They are both corollaries of Lagrange's theorem. 32 | \end{remark} 33 | 34 | \begin{proof} 35 | The theorem is another trivial corollary of Lagrange's theorem. Since 36 | $\order{\Zmodx{n}}$ is $\vphif{n}$, the order of any cyclic subgroup 37 | generated by $x$ must divide $\vphif{n}$. This precisely mimics what 38 | we did for Fermat's little theorem in $\Zmodx{p}$.
39 | \end{proof} 40 | -------------------------------------------------------------------------------- /math/rsa/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./rsa.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /math/rsa/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./rsa.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /math/rsa/rsa.tex: -------------------------------------------------------------------------------- 1 | \documentclass[11pt, oneside]{amsart} 2 | 3 | \usepackage{geometry} 4 | \geometry{letterpaper} 5 | 6 | \usepackage{ned-common} 7 | \usepackage{ned-abstract-algebra} 8 | 9 | \begin{document} 10 | 11 | \title{RSA} 12 | \maketitle 13 | 14 | \input{01-lagranges-theorem} 15 | \clearpage 16 | \input{02-facts-of-z-mod-p} 17 | \clearpage 18 | \input{03-fermats-little-theorem} 19 | \clearpage 20 | \input{04-facts-of-z-mod-n} 21 | \clearpage 22 | \input{05-eulers-totient-theorem} 23 | \clearpage 24 | \input{06-rsa} 25 | \clearpage 26 | \input{07-euclidean-algorithm} 27 | \clearpage 28 | \input{08-eulers-totient-formula} 29 | \clearpage 30 | \input{09-factoring-the-cyclic-group-zmodn} 31 | \clearpage 32 | \input{10-factoring-the-ring-zmodn} 33 | \clearpage 34 | \input{11-factoring-zmodnx} 35 | \clearpage 36 | \input{12-zmodpx-is-cyclic} 37 | \clearpage 38 | % TODO: show that \Zmodx{p^k} is cyclic! 39 | \input{13-carmichaels-totient-function} 40 | 41 | \end{document} 42 | -------------------------------------------------------------------------------- /math/rsa/xx-alternative-fermat-proofs.md: -------------------------------------------------------------------------------- 1 | I think the proof of Lagrange's theorem is plenty intuitive. But here 2 | are some attempts at even "more" intuitive proofs of Fermat's Little 3 | Theorem. 4 | 5 | ## Binomial Theorem Proof 6 | 7 | I read this on Wolfram. It is inductive. Note the statement is true 8 | for `a = 1` of course. 9 | 10 | Use the Binomial Theorem: 11 | 12 | (a + 1)^p = (p choose p) a^p + (p choose p-1) a^{p-1} + ... (p choose 1) a + (p choose 0) 1 13 | 14 | Note all the coefficients have the form: 15 | 16 | p! / (k! (p-k)!) 17 | 18 | 19 | For any `k` st `0 < k < p`, the `p` in the numerator is not cancelled by `k!` or `(p-k)!`, so `p` divides the coefficient. Modulo `p`, only the first and last terms survive, and by induction `(a + 1)^p = a^p + 1 = a + 1 (mod p)`. -------------------------------------------------------------------------------- /math/strang-algebra-md/01c-inversion-of-an-orthogonal-matrix.md: -------------------------------------------------------------------------------- 15 | So `e_1` should map to a vector which is `(<e_1, A_1>, <e_1, A_2>, ...)`; that 16 | is, we project `e_1` onto each column of `A`. Of course, this is just 17 | the first row of `A`. 18 | 19 | By definition of how a linear transformation is represented, we write 20 | this vector of projections as the first column. Thus, the first row has 21 | become the first column. 22 | 23 | This demonstrates that any orthogonal matrix (which represents 24 | rotations + flips) always has its inverse equal to its transpose! Thus 25 | we write: 26 | 27 | Q\inv=Q\trans 28 | 29 | since `Q` is often used for an orthogonal matrix.
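A quick numerical check of this fact (my own addition; any rotation matrix will do):

    import numpy as np

    theta = 0.7
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])  # a rotation: orthogonal

    # Q\trans inverts Q.
    assert np.allclose(Q.T @ Q, np.eye(2))
    assert np.allclose(np.linalg.inv(Q), Q.T)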
30 | -------------------------------------------------------------------------------- /math/strang-algebra-md/02b-lu-decomposition.md: -------------------------------------------------------------------------------- 1 | Performing "half" of this elimination can be seen to build the `LU` 2 | decomposition. Here, what we do is we start with `IA`. Then, when we 3 | add a row, we do the row operation on `A`, but the opposite operation 4 | on `I`. Here, we *only* try to eliminate in `A` below the 5 | diagonal. This results in transforming `A` to an upper triangular 6 | matrix, while `I` becomes lower triangular. 7 | 8 | Note that `A=LU` has `L` with 1s along the diagonal, while `U` has 9 | non-one diagonal. Sometimes we therefore factor to `LDU`. `L` is the 10 | same as before, but rows of `U` are scaled so that diagonal is 11 | one. The scaling is performed by `D`, which is a diagonal matrix with 12 | just the scaling values. 13 | 14 | Notice that because we may have needed to do pivots, in order to 15 | decompose any matrix we may have to decompose to `PA=LU` or `PA=LDU`. 16 | -------------------------------------------------------------------------------- /math/strang-algebra-md/02c-geometric-intuition-of-lu-decomposition.md: -------------------------------------------------------------------------------- 1 | Geometrically, what we are doing is this. We are saying: I want `e_1` 2 | to only impact the first coordinate of `b`; I want to eliminate its 3 | effect on the other coordinates. Then we say: good, now I want `e_2` to 4 | only involve itself with the second coordinate in the image space. 5 | 6 | What is the idea of such a transformation? Well, it is this: to write 7 | a matrix as a pair of matrices, the first of which sends 8 | 9 | e_i -> Sum_{j<=i} e_j 10 | 11 | and the second which sends 12 | 13 | e_i -> Sum_{j>=i} e_j 14 | -------------------------------------------------------------------------------- /math/strang-algebra-md/02d-computing-elimination.md: -------------------------------------------------------------------------------- 1 | What is the speed of Gaussian elimination? Well, for each row, we need 2 | to subtract it from every other row. This is `n(n-1)` row 3 | operations. Each involves `n` elements. So we're talking `O(n**3)` 4 | time. 5 | 6 | This means that to 10x the dimension of the matrix, we 1,000x the time 7 | to solve. But note that most matrices are sparse, so there are far 8 | fewer operations to perform. 9 | 10 | Another calculation note: sometimes Matlab will do row-exchanges for 11 | the sake of numerical stability, since if a pivot is small, it can be 12 | bad to scale by this. 13 | -------------------------------------------------------------------------------- /math/strang-algebra-md/02e-elementary-matrices.md: -------------------------------------------------------------------------------- 1 | **Elementary Matrices** 2 | 3 | All *invertible* linear transformations can be decomposed into a 4 | product of *elementary matrices*. These are: 5 | 6 | 1. Matrix that swaps two coordinates. 7 | 2. Matrix that scales a coordinate. 8 | 3. Matrix that adds a coordinate to another. This is a *shear 9 | mapping*. 10 | 11 | The first kind changes the sign of the determinant, the second changes 12 | the magnitude of the determinant, and the third has no impact. 13 | 14 | If we add in a fourth kind of elementary matrix, which zeros out a 15 | coordinate, then I do believe this generates all matrices.
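To make the determinant claims concrete, a small numpy check (my own; the particular matrices are arbitrary examples):

    import numpy as np

    swap  = np.array([[0., 1.], [1., 0.]])  # swaps the two coordinates
    scale = np.array([[3., 0.], [0., 1.]])  # scales the first coordinate
    shear = np.array([[1., 0.], [2., 1.]])  # adds 2x coord 1 to coord 2

    print(np.linalg.det(swap))   # -1.0: sign change
    print(np.linalg.det(scale))  #  3.0: magnitude change
    print(np.linalg.det(shear))  #  1.0: no impact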
16 | 17 | **Row-Operations vs Column-Operations** 18 | 19 | When we perform a row operation, we undo this by multiplying by an 20 | elementary matrix on the left side. This is a form of decomposition of 21 | the matrix. 22 | 23 | If we are doing Gaussian elimination, our operations are always to add 24 | a scalar multiple of a row `i` to another row `j>i`. The row operation 25 | matrix that performs this has a single off-diagonal entry, which is at 26 | position `(j, i)`. What this says is: add this much of the `i`th 27 | coordinate back to the `j`th coordinate, which will make up for our 28 | removal. 29 | 30 | Since `j>i`, this is a lower triangular matrix. 31 | 32 | We could also do the same thing by doing column operations. If we 33 | subtracted a column `i` from a later column `j`, we need a matrix 34 | which has an entry *above* the diagonal, and we need to apply this 35 | column operation *before* the reduced matrix. 36 | -------------------------------------------------------------------------------- /math/strang-algebra-md/02f-transpose.md: -------------------------------------------------------------------------------- 1 | **Transposes** 2 | 3 | So let's talk about transposes. I say that `A\trans x` maps `x` to its 4 | projection on each of the columns of `A`. That's like almost literally 5 | the definition. 6 | 7 | Also note: 8 | 9 | Ax = (x\trans A\trans)\trans 10 | 11 | This is maybe obvious. 12 | 13 | We already showed that for any orthonormal matrix, the inverse is 14 | equal to its transpose. For a matrix with skew, the transpose 15 | `A\trans` does not properly invert `x`. 16 | 17 | We say a matrix is **symmetric** if it is equal to its own transpose. 18 | 19 | `RR\trans` 20 | 21 | For **permutation matrices**, the transpose is also the inverse. (This 22 | is in part because permutation matrices are rotation matrices, albeit 23 | the rotation might have an imaginary component). 24 | 25 | **TODO**: I have no idea what symmetry means. 26 | -------------------------------------------------------------------------------- /math/strang-algebra-md/03a-column-and-null-spaces.md: -------------------------------------------------------------------------------- 1 | ## Ch3: Vector Spaces and Subspaces 2 | 3 | A vector space is a space closed under linear combinations. 4 | 5 | For a linear transformation, the *column space* defines all possible 6 | values of `Ax`. It is of course a vector space. Naturally, we can only 7 | solve `Ax=b` exactly when `b` lies in the column space. We can 8 | calculate the `span` of the columns, which is exactly the smallest 9 | vector space containing those columns. 10 | 11 | A matrix also has a *null space*; these are vectors mapped to the zero 12 | vector under the transformation. The zero vector is always in the null 13 | space, but the null space can contain more vectors. Note that the null 14 | space is itself a subspace. If the null space has positive dimension, 15 | then we say the matrix is *singular*. 16 | 17 | Note: *dimension* is the minimum size of a set of *basis 18 | vectors* that spans a space. All minimal sets of basis vectors have 19 | the same number of vectors, so dimension is well-defined. (Too lazy to 20 | prove this). 21 | 22 | A random matrix almost surely has *independent* columns; no column can 23 | be written in terms of the others. This is exactly when the nullspace 24 | has dimension zero.
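As an illustration (my own example, with made-up numbers): a singular matrix has a nontrivial null space, and `Ax = b` is solvable exactly when `b` lies in the column space.

    import numpy as np

    A = np.array([[1., 2.], [2., 4.]])  # second column = 2 * first: singular

    # (2, -1) is in the null space: A @ (2, -1) = 0.
    assert np.allclose(A @ np.array([2., -1.]), 0.)

    b_in  = np.array([1., 2.])  # a multiple of column 1: in the column space
    b_out = np.array([1., 0.])  # not in the column space

    for b in (b_in, b_out):
        x, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)
        print(rank, np.linalg.norm(A @ x - b))  # residual ~0 iff b is in the column space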
25 | -------------------------------------------------------------------------------- /math/strang-algebra-md/03c-full-inversion.md: -------------------------------------------------------------------------------- 1 | Okay. So we saw previously how to invert a matrix. The way we did this 2 | was like so: 3 | 4 | 0. Decompose into `LU`, while computing `L\inv` by performing the 5 | inverse operations on `I`. 6 | 0. Now you want to turn `U` into `I` step by step, performing the 7 | opposite action on `L\inv`. 8 | 0. That is the same as trying to get to reduced row-echelon form. 9 | 0. If `A` was invertible, then you have built `A\inv`. But if `A` was 10 | singular, you weren't quite able to turn `U` into `I`. You turned 11 | it into something that isn't of full-rank. 12 | 0. What we can do is apply the matrix we've built up to a vector 13 | `y`. If the result lies in the columnspace of `R` (the row reduced 14 | echelon form), then this is the inverse image. If not, then `y` is 15 | outside the columnspace of `A`. 16 | 0. Note that `R` makes calculation of the nullspace simple. The 17 | subspace of solutions has dimension equal to the dimension of the nullspace. 18 | -------------------------------------------------------------------------------- /math/strang-algebra-md/03d-more-vector-spaces.md: -------------------------------------------------------------------------------- 1 | Defines what it means for a set to be linearly independent. Shows that 2 | the columns are linearly independent exactly when the rank is full. 3 | 4 | Talks about how the row space is the column-space of `A\trans`. Duh. And I 5 | know that means exactly that the row-space is perpendicular to the 6 | nullspace. 7 | 8 | We can talk about a basis for a matrix space. 9 | -------------------------------------------------------------------------------- /math/strang-algebra-md/04a-orthogonal-subspaces.md: -------------------------------------------------------------------------------- 1 | As mentioned, row space is perpendicular to the nullspace. Likewise, 2 | the nullspace of `A\trans` is perpendicular to the columnspace. We say 3 | that these pairs are "orthogonal complements", in that they each 4 | contain all the vectors perpendicular to the other. 5 | 6 | I must have noted this elsewhere, but since the transformation from the 7 | row-space to the columnspace is one-to-one, the ranks of the row 8 | and column spaces are equal. 9 | 10 | He mentions that the transformation from the rowspace to the columnspace 11 | is *always* invertible. Basically, if you leave out the nullspace. 12 | -------------------------------------------------------------------------------- /math/strang-algebra-md/04b-simplest-projections.md: -------------------------------------------------------------------------------- 1 | We'll start talking about *projection matrices*. These project a point 2 | into a subspace spanned by some vectors `a_i`. 3 | 4 | The simplest projection matrices project onto a single vector `a`. To 5 | do this, we know that `a\trans x` just takes the inner product of `a` 6 | and `x`, which is how much we want to scale `a` to get the "closest" 7 | point to `x` along `a`. 8 | 9 | Let's take a quick note. We know that `<x - <x,a>a, a> = 0`. That is, the 10 | error is perpendicular to `a`. Note that this will *always* be part of 11 | the error of `x - ra`, for any scalar `r`. So all we can do is 12 | eliminate the component of the error along `a`, which minimizes the 13 | distance to `x`. 14 | 15 | Okay.
So there needs to be a matrix for this projection operation, 16 | because `Proj_a(u+v)=Proj_a(u)+Proj_a(v)`. I don't prove 17 | this. Instead, I will just show you the matrix: it is `a a\trans`. That 18 | describes exactly the operations I've described above. 19 | -------------------------------------------------------------------------------- /math/strang-algebra-md/04e-projections-are-symmetric.md: -------------------------------------------------------------------------------- 1 | Now, `uu\trans` can very easily be verified as symmetric. Since 2 | `AA\trans` is just the sum of symmetric matrices, then we know this is 3 | symmetric as well. 4 | 5 | This can also be proven like so: the transpose of `AA\trans` is 6 | `AA\trans`. That shows the projection is symmetric. 7 | 8 | **TODO** 9 | 10 | What is an intuitive reason for symmetry of projections? What does 11 | symmetry mean? Not all symmetric matrices can be projections, right? 12 | 13 | Moreover, why is it worth noting that the matrix is symmetric? How can 14 | that connect to other parts of algebra to give a clearer picture? 15 | -------------------------------------------------------------------------------- /math/strang-algebra-md/04g-least-squares-approximations.md: -------------------------------------------------------------------------------- 1 | So you want to fit a linear model, but you have too many datapoints 2 | and not enough parameters. 3 | 4 | Take your entire vector of target variables `y`, and view this as a 5 | vector. We want to find the `\theta` such that `X\theta` minimizes the 6 | squared loss. 7 | 8 | (I'm treating the `y` as a column, and `X`'s rows are the predictors 9 | for an example). 10 | 11 | So you can see `y\hat` as a function of `\theta`: the function treats 12 | the datapoints as fixed, and theta as the input. Because there are 13 | fewer parameters than datapoints, this is an embedding of a lower 14 | dimensionality space into a higher dimensionality space. 15 | 16 | This is exactly the problem we've seen before. We can simply find: 17 | 18 | theta\hat = (X\trans X)\inv X\trans y 19 | y\hat = X theta\hat 20 | 21 | In the next section I will try to give more intuition for this 22 | result. However, this result is very clear from linear algebra. 23 | -------------------------------------------------------------------------------- /math/strang-algebra-md/04i-gram-schmidt.md: -------------------------------------------------------------------------------- 1 | He discusses that an orthogonal matrix's transpose is its inverse. 2 | 3 | He notes that because orthogonal matrices don't change lengths, they 4 | can be numerically helpful. 5 | 6 | He describes Gram-Schmidt, which just successively subtracts out 7 | projections onto the previously considered vectors. You of course can 8 | collect these in a matrix R to put them back in. 9 | 10 | The QR decomposition gives us a way to solve least squares. In that 11 | case: 12 | 13 | (A\trans A)\inv = ((QR)\trans QR)\inv = (R\trans Q\trans Q R)\inv 14 | = (R\trans R)\inv 15 | 16 | So: 17 | 18 | (A\trans A)\inv A\trans y = (R\trans R)\inv (QR)\trans y 19 | = R\inv R\trans\inv R\trans Q\trans y = R\inv Q\trans y 20 | 21 | If you already have `A` in `QR` form this should take `n**2` 22 | time. It's `n**2` to apply `Q\trans y`, and it's easy to invert an 23 | upper triangular matrix.
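Here's a small numpy sketch of exactly that (my own; `solve_triangular` comes from scipy):

    import numpy as np
    from scipy.linalg import solve_triangular

    rng = np.random.default_rng(0)
    A = rng.normal(size=(10, 3))  # more datapoints than parameters
    y = rng.normal(size=10)

    Q, R = np.linalg.qr(A)  # reduced QR: Q is 10x3, R is 3x3 upper triangular

    # theta = R\inv Q\trans y, via back-substitution on R.
    theta_qr = solve_triangular(R, Q.T @ y)

    # Agrees with the normal-equations solution (A\trans A)\inv A\trans y.
    theta_ne = np.linalg.solve(A.T @ A, A.T @ y)
    assert np.allclose(theta_qr, theta_ne)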
24 | -------------------------------------------------------------------------------- /math/strang-algebra-md/06b-differential-equations.md: -------------------------------------------------------------------------------- 1 | **TODO**: This corresponds to section 6.3 which is titled Applications 2 | to Differential Equations. It starts at page 312. I think I can skip 3 | this section at this time, since I haven't had a need for this 4 | knowledge in practice previously. 5 | 6 | I can always return later to this. 7 | -------------------------------------------------------------------------------- /math/strang-algebra/01b-linear-transformations.tex: -------------------------------------------------------------------------------- 1 | \section{Linear Transformations/Matrices} 2 | 3 | \define{Matrices} map linear combinations to linear combinations. Such a 4 | map is called a \define{linear transformation}. The first column is what 5 | the first basis vector maps to, the second column is what the second 6 | basis vector maps to, etc. 7 | 8 | The product of a matrix with a vector is the result of applying the 9 | transformation to the vector. I like to see this as a weighted sum of 10 | the columns. 11 | 12 | We can \define{compose} transformations. This involves matrix 13 | multiplication. It is quite simple. Given $\mB\mA$, take the first 14 | column of $\mA$; that is what $\ve_1$ maps to under $\mA$. Then apply 15 | $\mB$ to this column. This is now equal to what ought to be the first 16 | column of the product $\mB\mA$. This suggests a method of 17 | calculation. 18 | 19 | Note to self: matrix multiplication involves $\mathcal{O}(n^3)$ time. 20 | 21 | Another common way to apply a matrix to a vector is this: take the dot 22 | product of each row in the matrix with the vector. What is the intuition 23 | behind this? I suppose you could say this: the row is the vector which is 24 | ``most'' transformed to the $i$-th basis vector. This is because you can 25 | treat a single row in the matrix as a linear functional, and the 26 | gradient defines the direction of steepest ascent. 27 | -------------------------------------------------------------------------------- /math/strang-algebra/01c-inversion-of-an-orthogonal-matrix.tex: -------------------------------------------------------------------------------- 1 | \section{Inverting A Matrix} 2 | 3 | How do we ``undo'' a linear transformation? That is, given a vector $\vv$ 4 | in terms of $B$, how do we find the vector $\vu$ such that $\vv=\mA\vu$? 5 | 6 | This is easy if we decompose $\vv$ into a linear combination of the 7 | columns of $\mA$. If the columns of $\mA$ are orthogonal in the geometry 8 | implied by $\mB$, we can do this by using the dot product. If, as is 9 | typical, $\mB$ is not orthogonal, then we must do more work. 10 | 11 | So let's take a moment to consider a matrix that maps the basis to 12 | another orthonormal basis. To invert an element $\vv$, we need to 13 | decompose it into columns of $\mA$. But since the columns are 14 | orthonormal, we can use the dot product just as we said. So $\ve_1$ 15 | should map to a vector which is $\parens{\iprod{\ve_1}{\mA_1}, 16 | \iprod{\ve_1}{\mA_2}, \ldots}$; that is, we project $\ve_1$ onto each 17 | column of $\mA$. Of course, this is just the first row of $\mA$. 18 | 19 | By definition of how a linear transformation is represented, we write 20 | this vector of projections as the first column. Thus, the first row has 21 | become the first column.
22 | 23 | This demonstrates that any orthogonal matrix (which represents 24 | rotations + flips) always has its inverse equal to its transpose! Thus 25 | we write: 26 | 27 | \begin{nedqn} 28 | \inv{\mQ} 29 | \eqcol 30 | \mQt 31 | \end{nedqn} 32 | 33 | since $\mQ$ is often used for an orthogonal matrix. 34 | -------------------------------------------------------------------------------- /math/strang-algebra/02b-lu-decomposition.tex: -------------------------------------------------------------------------------- 1 | \subsection{\texorpdfstring{$\mL\mU$}{LU} Decomposition} 2 | 3 | Performing ``half'' of this elimination can be seen to build the $\mL\mU$ 4 | decomposition. Here, what we do is we start with $\mI\mA$. Then, when we 5 | add a row, we do the row operation on $\mA$, but the opposite operation 6 | on $\mI$. Here, we \emph{only} try to eliminate in $\mA$ below the 7 | diagonal. This results in transforming $\mA$ to an upper triangular 8 | matrix, while $\mI$ becomes lower triangular. 9 | 10 | Note that $\mA = \mL\mU$ has $\mL$ with 1s along the diagonal, while 11 | $\mU$ has non-one diagonal. Sometimes we therefore factor to 12 | $\mL\mD\mU$. $\mL$ is the same as before, but rows of $\mU$ are scaled 13 | so that diagonal is one. The scaling is performed by $\mD$, which is a 14 | diagonal matrix with just the scaling values. 15 | 16 | Notice that because we may have needed to do pivots, in order to 17 | decompose any matrix we may have to decompose to $\mP\mA = \mL\mU$ or 18 | $\mP\mA=\mL\mD\mU$. 19 | -------------------------------------------------------------------------------- /math/strang-algebra/02c-geometric-intuition-of-lu-decomposition.tex: -------------------------------------------------------------------------------- 1 | \subsection{Geometric Intuition Of \texorpdfstring{$\mL\mU$}{LU} Decomposition} 2 | 3 | Geometrically, what we are doing is this. We are saying: I want $\ve_1$ 4 | to only impact the first coordinate of $\vb$; I want to eliminate its 5 | effect on the other coordinates. Then we say: good, now I want $\ve_2$ 6 | to only involve itself with the second coordinate in the image space. 7 | 8 | What is the idea of such a transformation? Well, it is this: to write 9 | a matrix as a pair of matrices, the first of which sends 10 | 11 | \begin{nedqn} 12 | \ve_i 13 | & \mapsto & 14 | \sum_{j \leq i} \ve_j 15 | \end{nedqn} 16 | 17 | \noindent 18 | and the second which sends 19 | 20 | \begin{nedqn} 21 | \ve_i 22 | & \mapsto & 23 | \sum_{j \geq i} \ve_j 24 | \end{nedqn} 25 | -------------------------------------------------------------------------------- /math/strang-algebra/02d-computing-elimination.tex: -------------------------------------------------------------------------------- 1 | \subsection{Computing Elimination} 2 | 3 | What is the speed of Gaussian elimination? Well, for each row, we need 4 | to subtract it from every other row. This is $n(n-1)$ row operations. 5 | Each involves $n$ elements. So we're talking $\mathcal{O}(n^3)$ time. 6 | 7 | This means that to 10x the dimension of the matrix, we 1,000x the time 8 | to solve. But note that most matrices are sparse, so there are far 9 | fewer operations to perform. 10 | 11 | Another calculation note: sometimes Matlab will do row-exchanges for 12 | the sake of numerical stability, since if a pivot is small, it can be 13 | bad to scale by this.
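For concreteness, here is a minimal Python sketch of elimination with partial pivoting (my own illustration, not from Strang; it assumes a square, invertible matrix):

\begin{verbatim}
import numpy as np

def eliminate(A):
    # Reduce A to upper triangular form in O(n^3) time.
    U = A.astype(float).copy()
    n = U.shape[0]
    for i in range(n):
        # Row exchange: bring up the largest pivot for stability.
        p = i + np.argmax(np.abs(U[i:, i]))
        U[[i, p]] = U[[p, i]]
        # Eliminate everything below the diagonal in column i.
        for j in range(i + 1, n):
            U[j, i:] -= (U[j, i] / U[i, i]) * U[i, i:]
    return U
\end{verbatim}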
14 | -------------------------------------------------------------------------------- /math/strang-algebra/02e-elementary-matrices.tex: -------------------------------------------------------------------------------- 1 | \subsection{Elementary Matrices} 2 | 3 | All \emph{invertible} linear transformations can be decomposed into a 4 | product of \define{elementary matrices}. These are: 5 | 6 | \begin{enumerate} 7 | \item Matrix that swaps two coordinates. 8 | \item Matrix that scales a coordinate. 9 | \item Matrix that adds a coordinate to another. This is a 10 | \define{shear mapping}. 11 | \end{enumerate} 12 | 13 | The first kind changes the sign of the determinant, the second changes 14 | the magnitude of the determinant, and the third has no impact. 15 | 16 | If we add in a fourth kind of elementary matrix, which zeros out a 17 | coordinate, then I do believe this generates all matrices. 18 | 19 | \subsection{Row-Operations vs Column-Operations} 20 | 21 | When we perform a row operation, we undo this by multiplying by an 22 | elementary matrix on the left side. This is a form of decomposition of 23 | the matrix. 24 | 25 | If we are doing Gaussian elimination, our operations are always to add a 26 | scalar multiple of a row $i$ to another row $j>i$. The row operation 27 | matrix that performs this has a single off-diagonal entry, which is at 28 | position $(j, i)$. What this says is: add this much of the $i$-th 29 | coordinate back to the $j$-th coordinate, which will make up for our 30 | removal. 31 | 32 | Since $j>i$, this is a lower triangular matrix. 33 | 34 | We could also do the same thing by doing column operations. If we 35 | subtracted a column $i$ from a later column $j$, we need a matrix 36 | which has an entry \emph{above} the diagonal, and we need to apply this 37 | column operation \emph{before} the reduced matrix. 38 | -------------------------------------------------------------------------------- /math/strang-algebra/02f-transpose.tex: -------------------------------------------------------------------------------- 1 | \subsection{Transposes} 2 | 3 | So let's talk about transposes. I say that $\mAt \vx$ maps $\vx$ to its 4 | projection on each of the columns of $\mA$. That's like almost literally 5 | the definition. 6 | 7 | Also note: 8 | 9 | \begin{nedqn} 10 | \mA\vx 11 | \eqcol 12 | \parenstrans{ 13 | \vxt \mAt 14 | } 15 | \end{nedqn} 16 | 17 | This is maybe obvious. 18 | 19 | We already showed that for any orthonormal matrix, the inverse is 20 | equal to its transpose. For a matrix with skew, the transpose 21 | $\mAt$ does not properly invert $\vx$. 22 | 23 | We say a matrix is \define{symmetric} if it is equal to its own 24 | transpose. 25 | 26 | $\mR\mRt$. (Why did I write this here?) 27 | 28 | For \define{permutation matrices}, the transpose is also the inverse. 29 | (This is in part because permutation matrices are rotation matrices, 30 | albeit the rotation might have an imaginary component). 31 | 32 | \TODO{I have no idea what symmetry means.} 33 | -------------------------------------------------------------------------------- /math/strang-algebra/03a-column-and-null-spaces.tex: -------------------------------------------------------------------------------- 1 | \section{Ch3: Vector Spaces and Subspaces} 2 | 3 | A vector space is a space closed under linear combinations. 4 | 5 | For a linear transformation, the \emph{column space} defines all 6 | possible values of $\mA\vx$. It is of course a vector space.
Naturally, 7 | we can only solve $\mA\vx=\vb$ exactly when $\vb$ lies in the column 8 | space. We can calculate the \define{span} of the columns, which is 9 | exactly the smallest vector space containing those columns. 10 | 11 | A matrix also has a \define{null space}; these are vectors mapped to the 12 | zero vector under the transformation. The zero vector is always in the 13 | null space, but the null space can contain more vectors. Note that the 14 | null space is itself a subspace. If the null space has positive dimension, 15 | then we say the matrix is \define{singular}. 16 | 17 | Note: \define{dimension} is the minimum size of a set of 18 | \define{basis vectors} that spans a space. All minimal sets of basis 19 | vectors have the same number of vectors, so dimension is well-defined. 20 | (Too lazy to prove this). 21 | 22 | A random matrix almost surely has \define{independent} columns; no 23 | column can be written in terms of the others. This is exactly when the 24 | nullspace has dimension zero. 25 | -------------------------------------------------------------------------------- /math/strang-algebra/03c-full-inversion.tex: -------------------------------------------------------------------------------- 1 | Okay. So we saw previously how to invert a matrix. The way we did this 2 | was like so: 3 | 4 | \begin{enumerate} 5 | \item Decompose into $\mLU$, while computing $\mLinv$ by performing 6 | the inverse operations on $\mI$. 7 | 8 | \item Now you want to turn $\mU$ into $\mI$ step by step, 9 | performing the opposite action on $\mLinv$. 10 | 11 | \item That is the same as trying to get to reduced row-echelon form. 12 | 13 | \item If $\mA$ was invertible, then you have built $\mAinv$. But if 14 | $\mA$ was singular, you weren't quite able to turn $\mU$ into $\mI$. 15 | You turned it into something that isn't of full-rank. 16 | 17 | \item What we can do is apply the matrix we've built up to a vector 18 | $\vy$. If the result lies in the columnspace of $\mR$ (the row reduced 19 | echelon form), then this is the inverse image. If not, then $\vy$ is 20 | outside the columnspace of $\mA$. 21 | 22 | \item Note that $\mR$ makes calculation of the nullspace simple. 23 | The subspace of solutions has dimension equal to the dimension of the nullspace. 24 | \end{enumerate} 25 | -------------------------------------------------------------------------------- /math/strang-algebra/03d-more-vector-spaces.tex: -------------------------------------------------------------------------------- 1 | Defines what it means for a set to be linearly independent. Shows that 2 | the columns are linearly independent exactly when the rank is full. 3 | 4 | Talks about how the row space is the column-space of $\mAt$. Duh. And I know 5 | that means exactly that the row-space is perpendicular to the nullspace. 6 | 7 | We can talk about a basis for a matrix space. 8 | -------------------------------------------------------------------------------- /math/strang-algebra/04a-orthogonal-subspaces.tex: -------------------------------------------------------------------------------- 1 | As mentioned, row space is perpendicular to the nullspace. Likewise, the 2 | nullspace of $\mAt$ is perpendicular to the columnspace. We say that 3 | these pairs are ``orthogonal complements'', in that they each contain all 4 | the vectors perpendicular to the other.
5 | 6 | I must have noted this elsewhere, but since the transformation from the 7 | row-space to the columnspace is one-to-one, the ranks of the row 8 | and column spaces are equal. 9 | 10 | He mentions that the transformation from the rowspace to the columnspace is 11 | \emph{always} invertible. Basically, if you leave out the nullspace. 12 | -------------------------------------------------------------------------------- /math/strang-algebra/04b-simplest-projections.tex: -------------------------------------------------------------------------------- 1 | We'll start talking about \define{projection matrices}. These project a 2 | point into a subspace spanned by some vectors $\va_i$. 3 | 4 | The simplest projection matrices project onto a single vector $\va$. To 5 | do this, we know that $\vat \vx$ just takes the inner product of $\va$ 6 | and $\vx$, which is how much we want to scale $\va$ to get the ``closest'' 7 | point to $\vx$ along $\va$. 8 | 9 | Let's take a quick note. We know that $\iprod{\vx - \iprod{\vx}{\va}\va}{\va} = 10 | 0$. That is, the error is perpendicular to $\va$. Note that this will 11 | \emph{always} be part of the error of $\vx - r\va$, for any scalar $r$. So 12 | all we can do is eliminate the component of the error along $\va$, which 13 | minimizes the distance to $\vx$. 14 | 15 | Okay. So there needs to be a matrix for this projection operation, 16 | because $\vproj{a}{u+v}=\vproj{a}{u}+\vproj{a}{v}$. I don't prove this. 17 | Instead, I will just show you the matrix: it is $\va \vat$. That 18 | describes exactly the operations I've described above. 19 | -------------------------------------------------------------------------------- /math/strang-algebra/04e-projections-are-symmetric.tex: -------------------------------------------------------------------------------- 1 | Now, $\vu\vut$ can very easily be verified as symmetric. Since $\mA\mAt$ 2 | is just the sum of symmetric matrices, then we know this is symmetric as 3 | well. 4 | 5 | This can also be proven like so: the transpose of $\mA\mAt$ is 6 | $\mA\mAt$. That shows the projection is symmetric. 7 | 8 | \TODO{What is an intuitive reason for symmetry of projections?} What 9 | does symmetry mean? Not all symmetric matrices can be projections, 10 | right? 11 | 12 | Moreover, why is it worth noting that the matrix is symmetric? How can 13 | that connect to other parts of algebra to give a clearer picture? 14 | -------------------------------------------------------------------------------- /math/strang-algebra/04g-least-squares-approximations.tex: -------------------------------------------------------------------------------- 1 | So you want to fit a linear model, but you have too many datapoints 2 | and not enough parameters. 3 | 4 | Take your entire vector of target variables $\vy$, and view this as a 5 | vector. We want to find the $\theta$ such that $\mX\theta$ minimizes the 6 | squared loss. 7 | 8 | (I'm treating the $\vy$ as a column, and $\mX$'s rows are the predictors 9 | for an example). 10 | 11 | So you can see $\hat{\vy}$ as a function of $\theta$: the function treats 12 | the datapoints as fixed, and theta as the input. Because there are fewer 13 | parameters than datapoints, this is an embedding of a lower 14 | dimensionality space into a higher dimensionality space. 15 | 16 | This is exactly the problem we've seen before.
We can simply find: 17 | 18 | \begin{nedqn} 19 | \hat{\theta} 20 | \eqcol 21 | \parensinv{\mXt\mX} \mXt \vy 22 | \\ 23 | \hat{\vy} 24 | \eqcol 25 | \mX \hat{\theta} 26 | \end{nedqn} 27 | 28 | In the next section I will try to give more intuition for this 29 | result. However, this result is very clear from linear algebra. 30 | -------------------------------------------------------------------------------- /math/strang-algebra/04i-gram-schmidt.tex: -------------------------------------------------------------------------------- 1 | He discusses that an orthogonal matrix's transpose is its inverse. 2 | 3 | He notes that because orthogonal matrices don't change lengths, they 4 | can be numerically helpful. 5 | 6 | He describes Gram-Schmidt, which just successively subtracts out 7 | projections onto the previously considered vectors. You of course can 8 | collect these in a matrix $\mR$ to put them back in. 9 | 10 | The QR decomposition gives us a way to solve least squares. In that 11 | case: 12 | 13 | \begin{nedqn} 14 | \parensinv{\mAt\mA} 15 | \eqcol 16 | \parensinv{ 17 | \parenstrans{\mQ\mR} 18 | \mQ\mR 19 | } 20 | \\ 21 | \eqcol 22 | \parensinv{ 23 | \mRt \mQt \mQ \mR 24 | } 25 | \\ 26 | \eqcol 27 | \parensinv{\mRt\mR} 28 | \end{nedqn} 29 | 30 | So: 31 | 32 | \begin{nedqn} 33 | \parensinv{\mAt\mA} \mAt \vy 34 | \eqcol 35 | \parensinv{\mRt \mR} 36 | \parenstrans{\mQ\mR} 37 | \vy 38 | \\ 39 | \eqcol 40 | \mRinv 41 | \parensinv{\mRt} 42 | \mRt 43 | \mQt 44 | \vy 45 | \\ 46 | \eqcol 47 | \mRinv 48 | \mQt 49 | \vy 50 | \end{nedqn} 51 | 52 | If you already have $\mA$ in $\mQ\mR$ form this should take $n^2$ time. 53 | It's $n^2$ to apply $\mQt \vy$, and it's easy to invert an upper 54 | triangular matrix. 55 | -------------------------------------------------------------------------------- /math/strang-algebra/06b-differential-equations.md: -------------------------------------------------------------------------------- 1 | **TODO**: This corresponds to section 6.3 which is titled Applications 2 | to Differential Equations. It starts at page 312. I think I can skip 3 | this section at this time, since I haven't had a need for this 4 | knowledge in practice previously. 5 | 6 | I can always return later to this.
7 | -------------------------------------------------------------------------------- /math/strang-algebra/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./strang-algebra.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /math/strang-algebra/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./strang-algebra.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /math/strang-algebra/strang-algebra.tex: -------------------------------------------------------------------------------- 1 | \documentclass[11pt, oneside]{amsart} 2 | 3 | \usepackage{geometry} 4 | \geometry{letterpaper} 5 | 6 | \usepackage{ned-common} 7 | \usepackage{ned-linear-algebra} 8 | 9 | \begin{document} 10 | 11 | \title{Strang Algebra Notes} 12 | \maketitle 13 | 14 | \input{01a-story-of-the-dot-product.tex} 15 | \input{01b-linear-transformations.tex} 16 | \input{01c-inversion-of-an-orthogonal-matrix.tex} 17 | 18 | \input{02a-gaussian-elimination.tex} 19 | \input{02b-lu-decomposition.tex} 20 | \input{02c-geometric-intuition-of-lu-decomposition.tex} 21 | \input{02d-computing-elimination.tex} 22 | \input{02e-elementary-matrices.tex} 23 | \input{02f-transpose.tex} 24 | 25 | \input{03a-column-and-null-spaces.tex} 26 | \input{03b-calculation-of-null-space.tex} 27 | \input{03c-full-inversion.tex} 28 | \input{03d-more-vector-spaces.tex} 29 | 30 | \input{04a-orthogonal-subspaces.tex} 31 | \input{04b-simplest-projections.tex} 32 | \input{04c-projection-into-subspaces.tex} 33 | \input{04d-more-about-projections.tex} 34 | \input{04e-projections-are-symmetric.tex} 35 | \input{04f-projections-without-orthogonality.tex} 36 | \input{04g-least-squares-approximations.tex} 37 | \input{04h-least-squares-intuition.tex} 38 | \input{04h-b-data-matrix-whitening.tex} 39 | \input{04i-gram-schmidt.tex} 40 | \input{05a-determinants.tex} 41 | 42 | \end{document} 43 | -------------------------------------------------------------------------------- /math/stys/ned-abstract-algebra.sty: -------------------------------------------------------------------------------- 1 | \newcommand{\order}[1]{|#1|} 2 | 3 | \newcommand{\Zmod}[1]{\Z / #1\Z} 4 | \newcommand{\Zmodx}[1]{\parens{\Z / #1\Z}^\times} 5 | 6 | \newcommand{\Zmoda}{\Zmod{a}} 7 | \newcommand{\Zmodax}{\Zmodx{a}} 8 | \newcommand{\Zmodb}{\Zmod{b}} 9 | \newcommand{\Zmodbx}{\Zmodx{b}} 10 | \newcommand{\Zmodn}{\Zmod{n}} 11 | \newcommand{\Zmodnx}{\Zmodx{n}} 12 | \newcommand{\Zmodp}{\Zmod{p}} 13 | \newcommand{\Zmodq}{\Zmod{q}} 14 | \newcommand{\Zmodpx}{\Zmodx{p}} 15 | 16 | \newcommand{\subgroup}{\leq} 17 | 18 | % Galois Field 19 | \newcommand{\GF}[1]{\operatorname{GF}\parens{#1}} 20 | -------------------------------------------------------------------------------- /math/stys/ned-calculus.sty: -------------------------------------------------------------------------------- 1 | \ProvidesPackage{ned-calculus} 2 | 3 | 
%% Derivatives 4 | 5 | \newcommand{\fderiv}[1]{ 6 | \frac{ 7 | \diff{} 8 | }{ 9 | \diff{#1} 10 | } 11 | } 12 | 13 | % Named derivatives 14 | \newcommand{\fderivx}{ 15 | \frac{ 16 | \diff{} 17 | }{ 18 | \dx 19 | } 20 | } 21 | 22 | %% Differentials 23 | 24 | \newcommand{\diff}[1]{\mathop{\mathrm{d}#1}} 25 | 26 | % Named differentials 27 | \newcommand{\dd}{\diff{d}} 28 | \newcommand{\ddp}{\diff{d'}} 29 | \newcommand{\dx}{\diff{x}} 30 | \newcommand{\domega}{\diff{\omega}} 31 | \newcommand{\dr}{\diff{r}} 32 | \newcommand{\dt}{\diff{t}} 33 | \newcommand{\dtp}{\diff{t'}} 34 | \newcommand{\dtpp}{\diff{t''}} 35 | \newcommand{\dtheta}{\diff{\theta}} 36 | \newcommand{\dv}{\diff{v}} 37 | \newcommand{\dy}{\diff{y}} 38 | \newcommand{\dz}{\diff{z}} 39 | 40 | %% Integrals 41 | 42 | \newcommand{\intR}{\int_\reals} 43 | \newcommand{\intevalbar}[2]{\Bigg|_{#1}^{#2}} 44 | % Spacing is a little weird on double integrals. 45 | \newcommand{\nediint}[2]{\int#1 \!\!\! \int#2} 46 | 47 | %% Partial derivatives. 48 | 49 | \newcommand{\fpartial}[2][]{\frac{\partial #1}{\partial #2}} 50 | \newcommand{\fpartialsq}[2][]{\frac{\partial^2 #1}{\partial {#2}^2}} 51 | % Text style version. 52 | \newcommand{\tfpartial}[2]{\tfrac{\partial #1}{\partial #2}} 53 | 54 | % Named Partial derivatives 55 | \newcommand{\fptheta}{\fpartial[]{\theta}} 56 | \newcommand{\fpthetax}{\fpartial[]{\theta^2}} 57 | \newcommand{\tfptheta}{\tfpartial[]{\theta}} 58 | 59 | %% Gradient 60 | 61 | \newcommand{\grad}{\nabla} 62 | \newcommand{\gradient}{\nabla} 63 | -------------------------------------------------------------------------------- /math/stys/ned-common.sty: -------------------------------------------------------------------------------- 1 | \ProvidesPackage{ned-common} 2 | 3 | %% Commonly used libraries. 4 | 5 | % AMS libs 6 | \usepackage{amsmath} 7 | \usepackage{amssymb} 8 | \usepackage{amsthm} 9 | % The version of boldsymbol from bm apparently obsolesces the version 10 | % from amsmath 11 | \usepackage{bm} 12 | % Needed for \intertext in IEEE? 13 | \usepackage{etoolbox} 14 | \usepackage{hyperref} 15 | % You may need to use \texorpdfstring when section titles contain math. 16 | \hypersetup{ 17 | colorlinks=true, 18 | urlcolor=blue, 19 | } 20 | % For IEEEeqnarray 21 | \usepackage{IEEEtrantools} 22 | \usepackage{mathtools} 23 | % nedqn is my default equation array environment 24 | \usepackage{ned-nedqn} 25 | 26 | %% Package components 27 | 28 | \usepackage{ned-common/fractions-powers} 29 | \usepackage{ned-common/misc} 30 | \usepackage{ned-common/parens} 31 | \usepackage{ned-common/parens-fns} 32 | \usepackage{ned-common/sets} 33 | \usepackage{ned-common/symbols} 34 | -------------------------------------------------------------------------------- /math/stys/ned-common/fractions-powers.sty: -------------------------------------------------------------------------------- 1 | %% Fractions 2 | 3 | \newcommand{\half}{\frac{1}{2}} 4 | % "over two" 5 | \newcommand{\otwo}[1]{\frac{#1}{2}} 6 | 7 | %% Powers 8 | 9 | \newcommand{\inv}[1]{{#1}^{-1}} 10 | \newcommand{\invf}[1]{\frac{1}{#1}} 11 | % There is probably little reason to prefer \inv to \invx... 12 | \newcommand{\invx}{^{-1}} 13 | \newcommand{\negsqrt}{^{-1/2}} 14 | -------------------------------------------------------------------------------- /math/stys/ned-common/misc.sty: -------------------------------------------------------------------------------- 1 | %% Definition Symbols 2 | 3 | \newcommand{\defeq}{\coloneqq} 4 | \newcommand{\minuseq}{\mathrel{-}=} 5 | 6 | %% Theorem Environments. 
7 | 8 | % Theorems should be numbered based on subsection. In fact, everything 9 | % should be numbered equally. 10 | % 11 | % Italics environments. 12 | \theoremstyle{plain} \newtheorem{theorem}{Theorem}[subsection] 13 | \theoremstyle{plain} \newtheorem{corollary}[theorem]{Corollary} 14 | \theoremstyle{plain} \newtheorem{lemma}[theorem]{Lemma} 15 | \theoremstyle{plain} \newtheorem{proposition}[theorem]{Proposition} 16 | 17 | % Plain text environments. 18 | \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} 19 | \theoremstyle{definition} \newtheorem{discussion}[theorem]{Discussion} 20 | \theoremstyle{definition} \newtheorem{example}[theorem]{Example} 21 | \theoremstyle{definition} \newtheorem{remark}[theorem]{Remark} 22 | 23 | %% Sinusoidals 24 | 25 | \newcommand{\sinf}[1]{\sin\parens{#1}} 26 | \newcommand{\cosf}[1]{\cos\parens{#1}} 27 | \newcommand{\tanf}[1]{\tan\parens{#1}} 28 | 29 | % arc functions 30 | \newcommand{\asinf}[1]{\arcsin\parens{#1}} 31 | \newcommand{\acosf}[1]{\arccos\parens{#1}} 32 | \newcommand{\atanf}[1]{\arctan\parens{#1}} 33 | 34 | \DeclareMathOperator{\atantwo}{arctan2} 35 | \newcommand{\atantwof}[1]{\atantwo\parens{#1}} 36 | 37 | %% Other 38 | 39 | \newcommand{\abs}[1]{\left|#1\right|} 40 | \newcommand{\bsum}[0]{\sum\limits} 41 | \newcommand{\conj}[1]{\overline{#1}} 42 | \newcommand{\define}[1]{\textit{#1}} 43 | \newcommand{\TODO}[1]{\textbf{TODO: #1}} 44 | 45 | \DeclareMathOperator{\lcm}{lcm} 46 | 47 | \newcommand{\bigo}{\mathcal{O}} 48 | -------------------------------------------------------------------------------- /math/stys/ned-common/parens-fns.sty: -------------------------------------------------------------------------------- 1 | %% expf functions 2 | 3 | % Better name than nexp? 4 | \newcommand{\expf}[1]{ 5 | \exp\parens{#1} 6 | } 7 | 8 | \newcommand{\nexp}[1]{ 9 | \exp\parens{#1} 10 | } 11 | 12 | %% Various sizes 13 | 14 | \newcommand{\nexpb}[1]{ 15 | \exp\bparens{#1} 16 | } 17 | 18 | \newcommand{\nexpB}[1]{ 19 | \exp\Bparens{#1} 20 | } 21 | 22 | \newcommand{\nexpBb}[1]{ 23 | \exp\Bbparens{#1} 24 | } 25 | 26 | %% Lambda, varphi functions. 27 | 28 | \newcommand{\lambdaf}[1]{ 29 | \lambda\parens{#1} 30 | } 31 | 32 | \newcommand{\lcmf}[1]{ 33 | \lcm\parens{#1} 34 | } 35 | 36 | \newcommand{\vphif}[1]{ 37 | \varphi\parens{#1} 38 | } 39 | 40 | \newcommand{\bigof}[1]{ 41 | \bigo\parens{#1} 42 | } 43 | 44 | \newcommand{\detf}[1]{ 45 | \det\parens{#1} 46 | } 47 | 48 | \newcommand{\gcdf}[1]{ 49 | \gcd\parens{#1} 50 | } 51 | 52 | \newcommand{\logf}[1]{ 53 | \log\parens{#1} 54 | } 55 | 56 | \newcommand{\maxf}[1]{ 57 | \max\parens{#1} 58 | } 59 | 60 | \newcommand{\minf}[1]{ 61 | \min\parens{#1} 62 | } 63 | 64 | \DeclarePairedDelimiter{\floor}{\lfloor}{\rfloor} 65 | \DeclareMathOperator{\sgn}{sgn} 66 | \newcommand{\sgnf}[1]{ 67 | \sgn\parens{#1} 68 | } 69 | 70 | \newcommand{\anglef}[1]{ 71 | \langle #1 \rangle 72 | } 73 | 74 | \newcommand{\ff}[1]{ 75 | f\parens{#1} 76 | } 77 | 78 | \newcommand{\phif}[1]{ 79 | \phi\parens{#1} 80 | } 81 | 82 | \newcommand{\xf}[1]{ 83 | x\parens{#1} 84 | } 85 | 86 | \newcommand{\vf}[1]{ 87 | v\parens{#1} 88 | } 89 | -------------------------------------------------------------------------------- /math/stys/ned-common/parens.sty: -------------------------------------------------------------------------------- 1 | %% Parentheses 2 | 3 | % Why mathclose? 
4 | % https://tex.stackexchange.com/questions/2607/spacing-around-left-and-right 5 | \newcommand{\parens}[1]{ 6 | \mathopen{}\left( 7 | {#1} 8 | \right)\mathclose{} 9 | } 10 | 11 | \newcommand{\sqbrackets}[1]{ 12 | \mathopen{}\left[ 13 | {#1} 14 | \right]\mathclose{} 15 | } 16 | 17 | % Parenthesis sizes 18 | \newcommand{\bparens}[1]{\bigl({#1}\bigr)} 19 | \newcommand{\Bparens}[1]{\Bigl({#1}\Bigr)} 20 | \newcommand{\Bbparens}[1]{\biggl({#1}\biggr)} 21 | 22 | % Various superscript operations look better "pulled in" to the macro 23 | % because of mathclose. 24 | 25 | \newcommand{\parensexp}[2]{ 26 | \mathopen{}\left( 27 | {#1} 28 | \right)^{#2}\mathclose{} 29 | } 30 | 31 | %% Parentheses inverse 32 | \newcommand{\parensinv}[1]{ 33 | \parensexp{#1}{-1} 34 | } 35 | \newcommand{\Bparensinv}[1]{ 36 | \mathopen{}\Bigl( 37 | {#1} 38 | \Bigr)^{-1}\mathclose{} 39 | } 40 | 41 | %% Parentheses squared 42 | \newcommand{\parensq}[1]{ 43 | \parensexp{#1}{2} 44 | } 45 | 46 | %% Parentheses sqrt 47 | \newcommand{\parensqrt}[1]{ 48 | \parensexp{#1}{1/2} 49 | } 50 | 51 | %% Parentheses negsqrt 52 | \newcommand{\parensnegsqrt}[1]{ 53 | \mathopen{}\left( 54 | {#1} 55 | \right)\negsqrt\mathclose{} 56 | } 57 | 58 | %% Parentheses transpose 59 | \newcommand{\parenstrans}[1]{ 60 | \mathopen{}\left( 61 | {#1} 62 | \right)\tran\mathclose{} 63 | } 64 | -------------------------------------------------------------------------------- /math/stys/ned-common/sets.sty: -------------------------------------------------------------------------------- 1 | \newcommand{\setof}[1]{ 2 | \left\{ #1 \right\} 3 | } 4 | 5 | \newcommand{\Field}{\mathbb{F}} 6 | \newcommand{\F}{\Field} 7 | 8 | % "Undefine" \C, which is provided by `hyperref`. Omg. Horrible naming 9 | % guys. 10 | \let\C\undefined 11 | \newcommand{\C}{\mathbb{C}} 12 | \newcommand{\reals}{\mathbb{R}} 13 | \newcommand{\R}{\mathbb{R}} 14 | 15 | \newcommand{\rn}{\reals^n} 16 | \newcommand{\N}{\mathbb{N}} 17 | \newcommand{\Z}{\mathbb{Z}} 18 | 19 | \newcommand{\powerset}[1]{ 20 | \mathcal{P}\parens{#1} 21 | } 22 | -------------------------------------------------------------------------------- /math/stys/ned-common/symbols.sty: -------------------------------------------------------------------------------- 1 | \newcommand{\ttilde}{\tilde{t}} 2 | \newcommand{\vtilde}{\tilde{v}} 3 | -------------------------------------------------------------------------------- /math/stys/ned-linear-algebra.sty: -------------------------------------------------------------------------------- 1 | \ProvidesPackage{ned-linear-algebra} 2 | 3 | \usepackage{ned-linear-algebra/matrices} 4 | \usepackage{ned-linear-algebra/operators} 5 | \usepackage{ned-linear-algebra/subscripts} 6 | \usepackage{ned-linear-algebra/vectors} 7 | 8 | % To get both bold and roman letters, the bm documentation describes 9 | % source: http://www.ctex.org/documents/packages/math/bm.pdf 10 | \newcommand{\mathrmbold}[1]{ 11 | \bm{\mathrm{#1}} 12 | } 13 | -------------------------------------------------------------------------------- /math/stys/ned-linear-algebra/operators.sty: -------------------------------------------------------------------------------- 1 | %% Inner Product 2 | 3 | \newcommand{\innerprod}[2]{ 4 | \left\langle #1, #2 \right\rangle 5 | } 6 | % Alias 7 | \newcommand{\iprod}[2]{\innerprod{#1}{#2}} 8 | 9 | %% Cross Product 10 | 11 | % Just an alias, because I am stupid and hate \times. 
12 | \newcommand{\cross}[0]{\times} 13 | 14 | %% Norm 15 | 16 | \newcommand{\norm}[1]{ 17 | \left\Vert #1 \right\Vert 18 | } 19 | \newcommand{\normsq}[1]{ 20 | \left\Vert #1 \right\Vert^2 21 | } 22 | 23 | %% Transpose 24 | 25 | \newcommand*{\tran}{ 26 | ^{\mkern-1.5mu\textsc{T}} 27 | } 28 | 29 | % Feels dumb, but otherwise you get "double superscript" problems. 30 | \newcommand*{\ptran}{ 31 | ^{\prime\mkern-1.5mu\textsc{T}} 32 | } 33 | 34 | \newcommand{\vectransvec}[1]{ 35 | \parenstrans{#1} 36 | \parens{#1} 37 | } 38 | 39 | \DeclareMathOperator{\proj}{proj} 40 | \newcommand{\vproj}[2]{\proj_{#1}\parens{#2}} 41 | -------------------------------------------------------------------------------- /math/stys/ned-linear-algebra/subscripts.sty: -------------------------------------------------------------------------------- 1 | %% Index Subscripts 2 | 3 | \newcommand{\subii}{_{i, i}} 4 | 5 | %% Vector Subscripts 6 | 7 | \newcommand{\seqt}{_{1:t}} 8 | 9 | \newcommand{\subvmu}{_{\vmu}} 10 | \newcommand{\subvmux}{_{\vmux}} 11 | \newcommand{\subvx}{_{\vecx}} 12 | \newcommand{\subvy}{_{\vecy}} 13 | \newcommand{\subvz}{_{\vecz}} 14 | 15 | %% Matrix Subscripts 16 | 17 | \newcommand{\submA}{_{\mtxA}} 18 | \newcommand{\submB}{_{\mtxB}} 19 | -------------------------------------------------------------------------------- /math/stys/ned-stats.sty: -------------------------------------------------------------------------------- 1 | \ProvidesPackage{ned-stats} 2 | 3 | \usepackage{ned-stats/ned-stats-constants} 4 | \usepackage{ned-stats/ned-stats-gaussians} 5 | \usepackage{ned-stats/ned-stats-operators} 6 | \usepackage{ned-stats/ned-stats-probability} 7 | \usepackage{ned-stats/ned-stats-symbols} 8 | -------------------------------------------------------------------------------- /math/stys/ned-stats/ned-stats-constants.sty: -------------------------------------------------------------------------------- 1 | %% Constants 2 | 3 | \newcommand{\twopi}{2\pi} 4 | 5 | %% Sqrts 6 | 7 | \newcommand{\sqrtpi}{\sqrt{\pi}} 8 | \newcommand{\sqrttwo}{\sqrt{2}} 9 | \newcommand{\sqrttwopi}{\sqrt{\twopi}} 10 | 11 | %% Inverses 12 | 13 | \newcommand{\invsqrtpi}{\frac{1}{\sqrtpi}} 14 | \newcommand{\invsqrttwopi}{\frac{1}{\sqrttwopi}} 15 | -------------------------------------------------------------------------------- /math/stys/ned-stats/ned-stats-operators.sty: -------------------------------------------------------------------------------- 1 | %% Expectation/Variance 2 | 3 | \let\oldPr\Pr 4 | \renewcommand{\Pr}[1]{\oldPr\parens{#1}} 5 | 6 | \newcommand{\expectation}[1]{ 7 | \mathbb{E} 8 | \left[ 9 | {#1} 10 | \right] 11 | } 12 | 13 | \newcommand{\Var}[1]{ 14 | \operatorname{Var} 15 | \left[ 16 | #1 17 | \right] 18 | } 19 | 20 | %% Likelihood 21 | 22 | \newcommand{\lhood}[2]{ 23 | \mathcal{L}\parens{ 24 | #1 25 | ; 26 | #2 27 | } 28 | } 29 | 30 | \newcommand{\llhood}[2]{ 31 | \log\lhood{#1}{#2} 32 | } 33 | 34 | \newcommand{\nllhood}[2]{ 35 | -\llhood{#1}{#2} 36 | } 37 | -------------------------------------------------------------------------------- /math/stys/ned-stats/ned-stats-probability.sty: -------------------------------------------------------------------------------- 1 | %% Conditional Probability functions 2 | 3 | \newcommand{\condpr}[2]{ 4 | \pr{#1 | #2} 5 | } 6 | 7 | % unnormalized conditional probability 8 | \newcommand{\tcondpr}[2]{ 9 | \tpr{#1 | #2} 10 | } 11 | 12 | %% Unconditional Density Functions 13 | 14 | % Probability density function 15 | \newcommand{\pr}[1]{ 16 | p\parens{#1} 17 | } 18 | 19 | % 
Unnormalized density function 20 | \newcommand{\tpr}[1]{ 21 | \tilde{p}\parens{#1} 22 | } 23 | 24 | %% Named Helpers 25 | 26 | % x 27 | \newcommand{\px}{\pr{x}} 28 | % conditional prob of x given mu_x 29 | \newcommand{\cpxmux}{\condpr{x}{\mux}} 30 | % conditional prob of vector x given mu_x 31 | \newcommand{\cpvxmux}{\condpr{\vecx}{\vmux}} 32 | 33 | % z 34 | \newcommand{\cpvz}[1]{\condpr{\vz}{#1}} 35 | 36 | % mu_x 37 | \newcommand{\pmux}{\pr{\mux}} 38 | % vector mu_x 39 | \newcommand{\pvmux}{\pr{\vmux}} 40 | % conditional probability of mu_x given x 41 | \newcommand{\cpmuxx}{\condpr{\mux}{x}} 42 | % conditional probability of vector mu_x given x 43 | \newcommand{\cpvmuxx}{\condpr{\vmux}{\vecx}} 44 | % unnormalized conditional probability of mu_x given x. 45 | \newcommand{\tcpmuxx}{\tcondpr{\mux}{x}} 46 |
-------------------------------------------------------------------------------- /math/stys/ned-stats/ned-stats-symbols.sty: -------------------------------------------------------------------------------- 1 | %% Dataset 2 | 3 | \newcommand{\dset}{\mathcal{D}} 4 | 5 | % Helper for iteration over dataset 6 | \newcommand{\xinD}{x_i \in \dset} 7 | 8 | %% Mu 9 | 10 | % mu x 11 | \newcommand{\mux}{\mu_x} 12 | \newcommand{\vmux}{\vecmu_{\vecx}} 13 | 14 | % mu mu_x 15 | \newcommand{\mumux}{\mu_{\mu_x}} 16 | \newcommand{\mumuxp}{ 17 | \mu'_{\mux} 18 | } 19 | \newcommand{\vmumux}{\vmu_{\vmux}} 20 | \newcommand{\vmumuxp}{\vmup_{\vmux}} 21 | 22 | %% Rho 23 | 24 | % rho mu_x 25 | \newcommand{\rhomux}{\rho_{\mux}} 26 | \newcommand{\rhomuxp}{\rho^\prime_{\mux}} 27 | 28 | % rho x 29 | \newcommand{\rhox}{\rho_{x}} 30 | 31 | %% Sigma 32 | 33 | % sigma x 34 | \newcommand{\sigmax}{\sigma_x} 35 | % sigma mu_x 36 | \newcommand{\sigmamux}{\sigma_{\mux}} 37 | \newcommand{\sigmamuxp}{\sigma^{\prime}_{\mux}} 38 | 39 | %% Variance 40 | 41 | \newcommand{\var}{\sigma^2} 42 | \newcommand{\vecvar}{\vecsigma^{2}} 43 | 44 | %% Covariance matrix 45 | 46 | \newcommand{\covmtx}{\mtx{\Sigma}} 47 | \newcommand{\covmtxp}{\mtx{\Sigma}^{\prime}} 48 | 49 | %% Named covariance matrices 50 | 51 | % Covariance for operations. 52 | \newcommand{\covmtxA}{\covmtx\submA} 53 | \newcommand{\covmtxB}{\covmtx\submB} 54 | \newcommand{\covmtxBA}{\covmtx_{\mtxBA}} 55 | 56 | % Covariance for variables. 57 | \newcommand{\covmtxmux}{\covmtx\subvmux} 58 | \newcommand{\covmtxx}{\covmtx\subvx} 59 | \newcommand{\covmtxz}{\covmtx\subvz} 60 | 61 | %% Named variances 62 | 63 | % var mu_x 64 | \newcommand{\varmux}{\var_{\mux}} 65 | \newcommand{\varmuxp}{ 66 | \sigma^{'2}_{\mux} 67 | } 68 | 69 | % var x 70 | \newcommand{\varx}{\var_x} 71 |
-------------------------------------------------------------------------------- /programming-languages/all-the-casts.md: -------------------------------------------------------------------------------- 1 | ## `static_cast` 2 | 3 | `static_cast<Y>(x)` converts an `X` instance to a `Y` instance. A 4 | common use of this is to do an unchecked downcast: 5 | `static_cast<Kitten*>(animalp)`. Of course, this may not be safe. 6 | 7 | This is also used for conversion between ints and floats. 8 | 9 | You could do such a cast via `(Kitten*) animalp`, but this is unsafe, 10 | since C allows you to cast to any kind of object. 11 | 12 | ## `dynamic_cast` 13 | 14 | `dynamic_cast<Y*>(x)` converts an `X*` to a `Y*`, but checks to make sure 15 | that the `X` is truly a `Y` before doing so. If it isn't, the cast returns 16 | `nullptr` (the reference form `dynamic_cast<Y&>` instead throws 17 | `std::bad_cast`). 18 | For this to work, the `X` class must have RTTI recorded. I believe you can force this to happen by adding a virtual method (e.g., a virtual destructor).
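A short sketch contrasting the two downcasts (my own toy example; `Animal`, `Kitten`, and `Puppy` are hypothetical classes, not from any library):

```cpp
#include <iostream>

struct Animal { virtual ~Animal() {} };  // a virtual method => RTTI available
struct Kitten : Animal {};
struct Puppy : Animal {};

int main() {
  Animal* animalp = new Puppy();

  // Unchecked: "trust me, it's a Kitten." Wrong here; using k1 as a
  // Kitten would be undefined behavior.
  Kitten* k1 = static_cast<Kitten*>(animalp);
  (void) k1;

  // Checked: yields nullptr, because *animalp is really a Puppy.
  Kitten* k2 = dynamic_cast<Kitten*>(animalp);
  std::cout << (k2 == nullptr) << "\n";  // prints 1

  delete animalp;
  return 0;
}
```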
20 | 21 | ## `reinterpret_cast` 22 | 23 | This is used to literally reinterpret one object as another. For 24 | instance, if you have a byte array representing an object, you might 25 | try to do a reinterpret cast to interpret it as the object. 26 | 27 | ## With Multiple Inheritance 28 | 29 | Note that `static_cast` and `dynamic_cast` must sometimes perform 30 | pointer fixups in the presence of 31 | multiple-inheritance. `reinterpret_cast`, in that case, would do the 32 | wrong thing. 33 | 34 | ## `const_cast` 35 | 36 | This removes const-ness: given `const T* tp`, `const_cast<T*>(tp)` gives you a plain `T*`. 37 | 38 | ## References 39 | 40 | http://www.cplusplus.com/doc/tutorial/typecasting/
-------------------------------------------------------------------------------- /programming-languages/features.md: -------------------------------------------------------------------------------- 1 | * Agents/Actors/CSP 2 | * Clojure 3 | * Scala 4 | * Go 5 | * Algebraic types 6 | * Haskell 7 | * Blocks 8 | * Ruby 9 | * Coroutines, generators, asyncio 10 | * Node, Python 11 | * FRP 12 | * Elm 13 | * Laziness 14 | * Haskell 15 | * Macros 16 | * Common Lisp, Scheme, Clojure 17 | * Process Isolation, Message Passing 18 | * Erlang, Elixir 19 | * Purity, monads 20 | * Haskell 21 | * Reflection 22 | * Resource Lifetime 23 | * Rust 24 | * Restarts for exceptions 25 | * Common Lisp 26 | * Structural typing 27 | * Go 28 |
-------------------------------------------------------------------------------- /programming-languages/generators.js: -------------------------------------------------------------------------------- 1 | function genToFunc (f) { 2 | var fIter; 3 | return (function (arg) { 4 | if (!fIter) { 5 | fIter = f(arg); 6 | return fIter.next().value; 7 | } else { 8 | return fIter.next(arg).value; 9 | } 10 | }); 11 | } 12 | 13 | var f = genToFunc(function* (arg) { 14 | while (true) { 15 | console.log("f: " + arg) 16 | arg = yield [g, arg + 1]; 17 | } 18 | }); 19 | 20 | var g = genToFunc(function* (arg) { 21 | while (true) { 22 | console.log("g: " + arg) 23 | arg = yield [h, arg + 1]; 24 | } 25 | }); 26 | 27 | var h = genToFunc(function* (arg) { 28 | while (true) { 29 | console.log("h: " + arg) 30 | arg = yield [f, arg + 1]; 31 | } 32 | }); 33 | 34 | function trampoline () { 35 | var nextFun = f; 36 | var nextVal = 1; 37 | // keeps passing control among the three generator-backed functions 38 | setInterval(function () { 39 | var value = nextFun(nextVal); 40 | nextFun = value[0]; 41 | nextVal = value[1]; 42 | }, 500); 43 | } 44 | 45 | // Should print in sequence. 46 | trampoline();
-------------------------------------------------------------------------------- /programming-languages/haskell/xx-todo.md: -------------------------------------------------------------------------------- 1 | # TODOs 2 | 3 | * Copy over my RPN code. 4 | * As we approach the thrilling conclusion, there are a few LYAH pages: 5 | * http://learnyouahaskell.com/zippers 6 | * Want to understand what `<*>` means for `Writer` and `State`. 7 | * Read about monad transformers 8 | * This is heavily used for `Writer` and `State`. I want to know what 9 | it is about. 10 | * Could it improve my RPN code? 11 | * `zipWithM` 12 | * I've also heard talk about `Arrow`. What is that? 13 | * I may want to browse the Haskell source code more: 14 | * Might peek around `GHC.Base` for more info. 15 | * I think there's more functionality in `Control.Monad`. 16 | * Random: 17 | * What is `MonadFail`, `MonadTrans`, `MonadIO`?
18 | * What is `callCC`? 19 | 20 | ## Many Books/Haskell Resources 21 | 22 | * http://www.apress.com/9781430262503 23 | * http://www.cambridge.org/us/academic/subjects/computer-science/programming-languages-and-applied-logic/thinking-functionally-haskell 24 | * http://chimera.labs.oreilly.com/books/1230000000929 25 | * http://www.cs.nott.ac.uk/~gmh/book.html 26 | * http://www.haskellcraft.com/craft3e/Home.html 27 | * http://www.yesodweb.com/book 28 | * https://www.haskell.org/tutorial/ 29 | * https://en.wikibooks.org/wiki/Yet_Another_Haskell_Tutorial/Preamble 30 | * https://en.wikibooks.org/wiki/Write_Yourself_a_Scheme_in_48_Hours 31 | * https://wiki.haskell.org/Haskell 32 | * http://dev.stephendiehl.com/hask 33 | * http://www.vex.net/~trebla/haskell/index.xhtml 34 | * https://wiki.haskell.org/Learning_Haskell 35 | * https://wiki.haskell.org/How_to_write_a_Haskell_program 36 | * https://wiki.haskell.org/Typeclassopedi 37 | * https://news.ycombinator.com/item?id=10008169
-------------------------------------------------------------------------------- /programming-languages/interface-dispatch.cc: -------------------------------------------------------------------------------- 1 | #include <iostream> 2 | 3 | struct B1 { 4 | virtual void method1() { 5 | std::cout << "METHOD1\n"; 6 | } 7 | 8 | virtual ~B1() {} 9 | }; 10 | 11 | struct B2 { 12 | virtual void method2() { 13 | std::cout << "METHOD2\n"; 14 | } 15 | 16 | virtual ~B2() {} 17 | }; 18 | 19 | struct Child : B1, B2 { 20 | virtual ~Child() {} 21 | }; 22 | 23 | int main() { 24 | Child* c = new Child(); 25 | 26 | c->method1(); 27 | c->method2(); 28 | 29 | std::cout << c << "\n"; 30 | std::cout << ((B1*) c) << "\n"; 31 | // Note the fixup! 32 | std::cout << ((B2*) c) << "\n"; 33 | 34 | B2* b2 = c; 35 | b2->method2(); 36 | std::cout << b2 << "\n"; 37 | B1* b1 = dynamic_cast<B1*>(b2); 38 | b1->method1(); 39 | // Note the unfixup! 40 | std::cout << b1 << "\n"; 41 | 42 | b2 = c; 43 | b1 = reinterpret_cast<B1*>(b2); 44 | // Method two gets called! 45 | b1->method1(); 46 | 47 | b2 = static_cast<B2*>(c); 48 | c = static_cast<Child*>(b2); 49 | c->method1(); 50 | 51 | return 0; 52 | } 53 |
-------------------------------------------------------------------------------- /programming-languages/list.rs: -------------------------------------------------------------------------------- 1 | // http://cglab.ca/~abeinges/blah/too-many-lists/book/ 2 | 3 | // Even though the items are heap allocated now, they are owned by a 4 | // single List. Prepending still requires copying the entire list; it 5 | // just doesn't invalidate the old one, which can live separately. 6 | // Maybe it would be nice to refcount the next?
7 | 8 | #[derive(Clone)] 9 | enum List<T> { 10 | Cons { val: T, next: Box<List<T>> }, 11 | Nil 12 | } 13 | 14 | use List::*; 15 | 16 | impl<T: Clone> List<T> { 17 | fn new(val: T, next: &List<T>) -> List<T> { 18 | Cons { 19 | val: val, 20 | next: Box::new(next.clone()) 21 | } 22 | } 23 | } 24 | 25 | fn main() { 26 | let v = List::new(32, &Nil); 27 | let v2 = List::new(64, &v); 28 | let v3 = List::new(128, &v2); 29 | 30 | let mut x = &v3; 31 | while let Cons { val, next } = x { 32 | println!("Val: {}", val); 33 | x = next; 34 | } 35 | } 36 |
-------------------------------------------------------------------------------- /programming-languages/multi_parameter_type_class.hs: -------------------------------------------------------------------------------- 1 | {-# LANGUAGE MultiParamTypeClasses #-} 2 | 3 | data NList e = Nil 4 | | Cons e (NList e) 5 | 6 | class Collection c where 7 | contains :: Eq e => e -> c e -> Bool 8 | isEmpty :: c e -> Bool 9 | 10 | instance Collection NList where 11 | --contains :: Int -> NList Int -> Bool 12 | contains e Nil = False 13 | contains e1 (Cons e2 rest) = True --(e1 == e2) `or` (contains e1 rest) 14 | 15 | --isEmpty :: NList Int -> Bool 16 | isEmpty Nil = True 17 | isEmpty _ = False
-------------------------------------------------------------------------------- /programming-languages/paredit-notes.md: -------------------------------------------------------------------------------- 1 | * Use `M-(` to wrap a word in parens. You can select text to wrap 2 | multiple words. 3 | * To unwrap an s-expr, use `M-s`. 4 | * To comment out a bunch of lisp, use `M-;`.
-------------------------------------------------------------------------------- /programming-languages/rank_n_example.hs: -------------------------------------------------------------------------------- 1 | {-# LANGUAGE RankNTypes #-} 2 | 3 | id :: forall a. a -> a 4 | id x = x 5 | 6 | type IdFunc = forall a. a -> a 7 | id2 :: IdFunc 8 | id2 x = x
-------------------------------------------------------------------------------- /programming-languages/rust/interesting-libraries.txt: -------------------------------------------------------------------------------- 1 | Clap: Command line args. 2 | Crossbeam: concurrency. 3 | Diesel: ORM. 4 | Rusti: Interactive Rust. 5 | Serde: Serialization/deserialization framework.
-------------------------------------------------------------------------------- /programming-languages/tagged-ptrs.md: -------------------------------------------------------------------------------- 1 | ## Pointer Tagging 2 | 3 | Normally a variable in a dynamic language stores a reference (i.e., 4 | pointer) to a memory location where the object lives. The object needs 5 | to store its type. 6 | 7 | This is annoying for primitive types, which then have to get allocated 8 | on the heap. They're also subject to garbage collection. 9 | 10 | Therefore, we use the fact that all objects will be aligned to, say, a 11 | 4-byte boundary. That leaves a couple bits that will always be zero in 12 | a pointer. 13 | 14 | You use one of these bits to mean "I'm an integer, not a 15 | pointer". The other 63 bits can store the integer. 16 | 17 | Such an object is often called `Fixnum` in Ruby and Lisp. Note that it 18 | has a range befitting a 63-bit quantity. 19 | 20 | ## Integer Promotion 21 | 22 | For 64-bit ints, you could use a reference to a word storing the integer 23 | value. However, you typically jump right to a multi-word bignum 24 | representation, which is of course slow.
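Here is a toy sketch of the tagging trick in C++ (my own encoding, not Ruby's or any particular Lisp's exact scheme), using the low bit as the "I'm an integer" flag:

```cpp
#include <cstdint>
#include <iostream>

// Toy encoding: low bit 1 => the remaining 63 bits hold an integer;
// low bit 0 => the word is an aligned pointer.
uintptr_t make_fixnum(intptr_t n) {
  return (static_cast<uintptr_t>(n) << 1) | 1;
}

bool is_fixnum(uintptr_t v) { return (v & 1) != 0; }

intptr_t fixnum_value(uintptr_t v) {
  // Relies on arithmetic right shift of negative values, which is
  // what every mainstream compiler does.
  return static_cast<intptr_t>(v) >> 1;
}

int main() {
  uintptr_t v = make_fixnum(-42);
  std::cout << is_fixnum(v) << " " << fixnum_value(v) << "\n";  // 1 -42
  return 0;
}
```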
25 | 26 | The language should do the promotion once you roll past the boundary, 27 | meaning math is always correct. 28 | 29 | This means every fixnum operation must involve a range check. Also, a 30 | call site returning a fixnum today could return a bignum tomorrow, 31 | creating some complexity for optimization. 32 | 33 | Ruby represents bignums with a length and a pointer to a series of 34 | contiguous 32-bit blocks. The first block represents bits 0-31, the 35 | next bits 32-63, etc. 36 | 37 | Bignums are their own object, sadly. However, for bignums that are 38 | small (probably numbers needing exactly 64 bits), it can reuse the 39 | pointer to the block representation as an integer.
-------------------------------------------------------------------------------- /programming-languages/unique_ptr.cpp: -------------------------------------------------------------------------------- 1 | #include <iostream> 2 | #include <memory> 3 | #include <vector> 4 | 5 | int main() { 6 | std::unique_ptr<std::vector<int>> p(new std::vector<int>({ 1, 3, 5 })); 7 | 8 | for (auto i : *p) { 9 | std::cout << i << '\n'; 10 | } 11 | 12 | // std::move can be used to generate an rvalue reference to a named 13 | // variable, but it's now on you not to reuse the old value!! 14 | // 15 | // The unique_ptr code is the one that will reset `p`. I think that 16 | // `std::move` doesn't actually change `p` at all; it just casts to 17 | // an rvalue ref so that this can be used in the move constructor, 18 | // which actually makes the changes. 19 | // 20 | // We can remove the assignment to see if the segfault still 21 | // happens! It doesn't! 22 | std::unique_ptr<std::vector<int>> p2 = std::move(p); 23 | 24 | // Will normally segfault. 25 | for (auto i : *p) { 26 | std::cout << i << '\n'; 27 | } 28 | 29 | return 0; 30 | } 31 |
-------------------------------------------------------------------------------- /randos/automotive/engines/04c-v-four.md: -------------------------------------------------------------------------------- 1 | ``` 2 | # We have two cylinder banks. The left and right banks are offset by 90deg. 3 | # L1 and R1 will share a crankpin. L2 and R2 will share a crankpin. 4 | # The crankpins are offset by 180deg. 5 | # We will start with L1 firing and crank toward the right bank. 6 | 7 | - 0deg - 90deg: L1 combustion, R1 compression, L2 intake, R2 exhaust (1x power) 8 | - 90deg - 180deg: L1 combustion, R1 combustion, L2 intake, R2 intake (2x power) 9 | - 180deg - 270deg: L1 exhaust, R1 combustion, L2 compression, R2 intake (1x power) 10 | - 270deg - 360deg: L1 exhaust, R1 exhaust, L2 compression, R2 compression (silence) 11 | - 360deg - 450deg: L1 intake, R1 exhaust, L2 combustion, R2 compression (1x power) 12 | - 450deg - 540deg: L1 intake, R1 intake, L2 combustion, R2 combustion (2x power) 13 | - 540deg - 630deg: L1 compression, R1 intake, L2 exhaust, R2 combustion (1x power) 14 | - 630deg - 720deg: L1 compression, R1 compression, L2 exhaust, R2 exhaust (silence) 15 | 16 | Production motorcycles: 17 | # Aprilia RSV4: 65deg V4 with 180deg crank offset. 18 | # Aprilia Tuono V4: 65deg V4 with 180deg crank offset. 19 | # Ducati Multistrada V4: 90deg V4 with 70deg crank offset. 20 | # Ducati Panigale V4: 90deg V4 with 70deg offset. 21 | 22 | Homologations and discontinued bikes: 23 | # Honda RC213V-S: 90deg V4 with 360deg crank offset. (homologation bike; no longer produced) 24 | # Honda VFR800: 90deg V4 with 180deg crank offset. (Discontinued) 25 | # Yamaha V-MAX: 70deg V4, ??? crank offset?
26 | ``` 27 |
-------------------------------------------------------------------------------- /randos/automotive/engines/05-inline-five.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ruggeri/tech-notes/bd3981d766c26105c82bc55ea05fca0dacdb1b42/randos/automotive/engines/05-inline-five.md
-------------------------------------------------------------------------------- /randos/automotive/engines/06b-v-six.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ruggeri/tech-notes/bd3981d766c26105c82bc55ea05fca0dacdb1b42/randos/automotive/engines/06b-v-six.md
-------------------------------------------------------------------------------- /randos/automotive/engines/10-v-ten.md: -------------------------------------------------------------------------------- 1 | - Basically two inline fives. 2 | - Inline fives have imperfect primary and secondary balance. 3 | - At this point, why not go for the V12, which will have better balance? 4 | - However, V10 engines are not unheard of. 5 | - V10s may be used if a V8 would not provide enough power, and a V12 6 | might be too big.
-------------------------------------------------------------------------------- /randos/automotive/engines/TODO.md: -------------------------------------------------------------------------------- 1 | ## TODO 2 | 3 | Note source: Advanced Engine Technology. 4 | 5 | Bore vs stroke 6 | 7 | https://www.youtube.com/watch?v=UV3RwBPqznU 8 | https://www.youtube.com/watch?v=j8DSBH2GG8s&t=1220s 9 | https://www.youtube.com/watch?v=C_YNn3ZkJmU&t=100s 10 | https://www.cycleworld.com/story/blogs/ask-kevin/how-motorcycle-cylinder-bore-stroke-affect-engine-performance/ 11 | 12 | - https://en.wikipedia.org/wiki/Internal_combustion_engine 13 | - Have to talk about stuff like cams. 14 | - https://en.wikipedia.org/wiki/Valvetrain 15 | - https://en.wikipedia.org/wiki/Camshaft 16 | - https://en.wikipedia.org/wiki/Poppet_valve 17 | - https://en.wikipedia.org/wiki/Overhead_camshaft_engine 18 | - https://en.wikipedia.org/wiki/Timing_belt_(camshaft) 19 | - https://en.wikipedia.org/wiki/Valve_timing 20 | - https://en.wikipedia.org/wiki/Desmodromic_valve 21 | - https://en.wikipedia.org/wiki/Variable_valve_timing 22 | - Engine Power/Torque 23 | - https://en.wikipedia.org/wiki/Stroke_ratio 24 | - https://en.wikipedia.org/wiki/Compression_ratio 25 | - Fuel 26 | - https://en.wikipedia.org/wiki/Carburetor 27 | - https://en.wikipedia.org/wiki/Fuel_injection 28 | - https://en.wikipedia.org/wiki/Fuel_pump 29 | - Other Systems 30 | - Cooling system 31 | - Ignition system, Starting system, Alternator 32 | - Lubrication system 33 | - Super and turbo chargers 34 | - Suspension. 35 |
-------------------------------------------------------------------------------- /randos/automotive/engines/engine-balancing.md: -------------------------------------------------------------------------------- 1 | The easiest imbalance to counteract is a primary translational imbalance 2 | of constant norm that rotates direction at the rate of the crankshaft. 3 | This can be counteracted by a crankshaft counterweight. If necessary, to 4 | avoid creating a torque, you would want to use two counterweights that 5 | straddle the CoM. You might use this strategy with a 90deg V-twin. 6 | 7 | Next, let's try to counteract a primary translational imbalance that 8 | varies sinusoidally along a single axis. You want to spin two 9 | counterweights in opposite directions. This requires two balancing 10 | shafts. Each counterweight should be weighted/positioned to offset half 11 | the force you want to counteract. The two weights need to run in 12 | opposite directions so that one dimension of the centripetal force is 13 | canceled, while the other is constructively amplified. You could need 14 | something like this for a 360deg or 270deg twin, a T-Plane triple, or a 15 | crossplane I4.
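To see why the counter-rotating pair cancels one axis, here is a quick sketch in my own notation (each weight contributing a rotating force of magnitude $F$ at shaft angle $\theta$):

$$F_1 + F_2 = F(\cos\theta, \sin\theta) + F(\cos\theta, -\sin\theta) = (2F\cos\theta, 0)$$

The vertical components cancel while the horizontal components add, leaving a force that oscillates sinusoidally along one fixed axis, which is exactly the kind of force needed to oppose the imbalance described above.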
16 | 17 | To counteract secondary forces, you need a balancing shaft that runs at 18 | 2x the crankshaft speed. You might want these on an I4 to counteract 19 | secondary vibration. 20 | 21 | Let's correct a torque. You should be able to put two counterweights on 22 | a shaft (straddling the CoM), at an angle of 180deg. This will create a 23 | torque that rotates about the axis of the shaft. Again, if the torque 24 | only varies sinusoidally in amplitude in one direction, you need to run 25 | two balance shafts in opposite directions. You might want to do this on 26 | a triple. 27 |
-------------------------------------------------------------------------------- /randos/automotive/turbocharging.md: -------------------------------------------------------------------------------- 1 | - A turbocharger is a compressor driven by exhaust gas. The turbine takes 2 | energy from the exhaust gas and uses it to drive a compressor to suck 3 | more air into the engine. 4 | - Thus, it is harnessing waste heat. It is often used to achieve better 5 | efficiency. 6 | - A supercharger is driven off a belt from the engine to run the 7 | compressor. 8 | - Today, it can be electric, which means you might get the energy for 9 | free from regenerative braking. 10 | - Turbochargers can improve efficiency, because they allow a 11 | manufacturer to produce more HP from a smaller engine. And the smaller 12 | engine, when not operating at high RPM or throttle, can be more 13 | efficient than a larger engine. 14 | - A turbo added to an engine of the same size should not improve 15 | efficiency. It should reach higher HP, but it will use more fuel as it 16 | sucks more air. 17 | - The supercharger is driven by the engine, so it takes power from the 18 | engine. It is not going to improve efficiency. 19 | - The last point is about turbo lag. The turbocharger has some inertia. 20 | As you floor the throttle, the engine is going to build pressure in 21 | the exhaust, which will then accelerate the turbocharger. But this can 22 | take some time, which is felt as a delay. People feel this lag makes the 23 | car seem disconnected from the driving experience.
24 | 25 | Source: https://www.youtube.com/watch?v=UUFf4n0MUq8 26 | -------------------------------------------------------------------------------- /randos/bicycles/alfine-igh-exploded-view.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ruggeri/tech-notes/bd3981d766c26105c82bc55ea05fca0dacdb1b42/randos/bicycles/alfine-igh-exploded-view.pdf -------------------------------------------------------------------------------- /randos/bicycles/bike-manufacturers.md: -------------------------------------------------------------------------------- 1 | Here are some of the largest bike manufacturers: 2 | 3 | - Specialized 4 | - Cannondale 5 | - Trek 6 | - Giant 7 | - Bianchi 8 | - Canyon 9 | -------------------------------------------------------------------------------- /randos/bicycles/shimano-ranges.md: -------------------------------------------------------------------------------- 1 | ## Road 2 | 3 | **Shimano** 4 | 5 | 1. Dura-Ace 6 | 2. Ultegra 7 | 3. Shimano 105 8 | 4. Tiagra 9 | 5. Sora 10 | 6. Claris 11 | 12 | **SRAM** 13 | 14 | 1. SRAM Red (highest) 15 | 2. SRAM Force 16 | 3. SRAM Rival 17 | 4. SRAM Apex (lowest) 18 | 19 | ## Mountain 20 | 21 | 1. XTR 22 | 2. XT 23 | 3. SLX 24 | 4. Deore 25 | 5. Acera/Alivio 26 | 27 | Acera/Alivio has been replaced as of 2023 with CUES. 28 | 29 | **SRAM** 30 | 31 | 1. XX1 Eagle (Highest) 32 | 2. X01 Eagle 33 | 3. GX Eagle 34 | 4. NX Eagle 35 | 5. SX Eagle (Lowest) 36 | 37 | ## Gravel 38 | 39 | 1. GRX RX800 is equivalent to Ultegra. 40 | 2. GRX600 is maybe equivalent to Shimano 105. 41 | 3. RX400 is equivalent to Tiagra. 42 | 43 | Source: https://www.rutlandcycling.com/content/all/guide-to-shimano-road-amp-mtb-groupsets.html 44 | -------------------------------------------------------------------------------- /randos/chess-notes.md: -------------------------------------------------------------------------------- 1 | * Alpha-Beta needs move ordering to do well. 
2 | * NegaScout 3 | 4 | https://www.chessprogramming.org/Attack_and_Defend_Maps 5 | https://www.chessprogramming.org/Board_Representation 6 | https://www.chessprogramming.org/Encoding_Moves 7 | https://www.chessprogramming.org/Move_Generation 8 | https://www.chessprogramming.org/Move_List 9 | https://www.chessprogramming.org/Move_Ordering 10 | https://www.chessprogramming.org/Square_Attacked_By#0x88_Difference 11 | 12 | https://www.chessprogramming.org/Alpha-Beta 13 | https://www.chessprogramming.org/Minimax 14 | https://www.chessprogramming.org/Selectivity 15 | 16 | https://www.chessprogramming.org/0x88 17 | https://www.chessprogramming.org/Aspiration_Windows 18 | https://www.chessprogramming.org/Delta_Pruning 19 | https://www.chessprogramming.org/Fail-Low 20 | https://www.chessprogramming.org/Fail-High 21 | https://www.chessprogramming.org/Futility_Pruning 22 | https://www.chessprogramming.org/Iterative_Deepening 23 | https://www.chessprogramming.org/MTD(f) 24 | https://www.chessprogramming.org/NegaScout 25 | https://www.chessprogramming.org/Null_Move 26 | https://www.chessprogramming.org/Null_Move_Pruning 27 | https://www.chessprogramming.org/Principal_Variation 28 | https://www.chessprogramming.org/Principal_Variation_Search 29 | https://www.chessprogramming.org/Quiescence_Search 30 | https://www.chessprogramming.org/Transposition_Table 31 | 32 | https://www.chessprogramming.org/Search 33 | https://www.chessprogramming.org/Evaluation 34 | https://www.chessprogramming.org/Opening_Book 35 | https://www.chessprogramming.org/Endgame_Tablebases 36 | 37 | https://www.chessprogramming.org/Recommended_Reading
-------------------------------------------------------------------------------- /randos/computer-science-problem-areas.md: -------------------------------------------------------------------------------- 1 | * Bioinformatics. 2 | * Communication. Network software. 3 | * Cryptography: security and anonymity. 4 | * Document Retrieval. Document summary, categorization. 5 | * Facial Recognition. 6 | * Game Playing. 7 | * Large scale databases for large websites. Entails distributed systems. 8 | * Object detection/recognition/tracking. 9 | * Recommendation. 10 | * Robotic navigation and control. Scene reconstruction. 11 | * Search. Q+A. 12 | * Spam Detection 13 | * Speech Recognition. 14 | * Text Recognition. 15 | * Theorem Proving. Logical deduction/reasoning. Constraint solving. 16 | * Translation. 17 | 18 | ## Most Interesting 19 | 20 | * Robotics, vision, control, navigation. 21 | * Many applications; salesclerks, driverless cars. 22 | * Search, Q+A. Information extraction. 23 | * Theorem proving. Problem solving. 24 | * Have thought about whether this could help our students by 25 | checking their proofs? 26 | * Communication and Networking. 27 | * Better telepresence. 28 |
-------------------------------------------------------------------------------- /randos/cpu-architecture.md: -------------------------------------------------------------------------------- 1 | **TODO**: This is a list of CPU architecture subjects. I thought I 2 | wrote all this stuff up long ago... I don't have time right now for 3 | this, but these are the main subjects to write up when I get around to 4 | it. 5 | 6 | ## CISC and RISC 7 | 8 | CISC combines several operations into a single instruction. In the 9 | early days, CISC was considered an advantage because it meant that 10 | people writing assembly could have an easier life.
Also, very early 11 | machines were slow to fetch instructions from memory, so it was a 12 | performance advantage to ask the CPU to do multiple things. 13 | 14 | * CISC, RISC 15 | * VLIW, EPIC 16 | * Flynn's Taxonomy (SIMD etc) 17 | * Vectorized instructions 18 | * Instruction pipeline 19 | * Microarchitecture 20 | * Branch Predictor 21 | * Out-of-order execution 22 | * Memory dependence prediction 23 | * Hazard, Pipeline stall, bubble 24 | * Superscalar processor 25 | * Hyperthreading, SMT 26 | * Speculative execution 27 | * Vector processor 28 | * NX Bit 29 | -------------------------------------------------------------------------------- /randos/cryptocurrency/new-ideas.md: -------------------------------------------------------------------------------- 1 | * Proof of stake 2 | * Delegated proof of stake 3 | * Dapps (Ethereum smart contracts?) 4 | * Zcash: how does that work? 5 | 6 | ## Largest Currencies 7 | 8 | * Bitcoin $141B 9 | * Ethereum $47B 10 | * Faster blocks. 11 | * Can write arbitrary code. 12 | * Ripple $18B 13 | * Consensus algorithm. Doesn't mine. 14 | * High tx rate, low energy use, low transaction cost. 15 | * Bitcoin Cash $14B 16 | * Fork that increased block size to 8MB. 17 | * EOS $7.7B 18 | * Seems like it is similar to ETH. 19 | * Uses proof of stake. 20 | * Stellar $6.5B 21 | * Allows transfer of real world currencies? 22 | * I'm confused how this works. 23 | * Litecoin $5B 24 | * Uses scrypt, which is memory hard. 25 | * Intent is to thrawt ASIC mining. 26 | * Cardano 27 | * Looks again like ETH but with proof of stake. 28 | * IOTA 29 | * People seem to think this is a scam. 30 | * Not sure what it is supposed to offer. 31 | * Tether 32 | * Supposed to be a "stablecoin." 33 | * I think the idea is that they keep dollars somewhere. 34 | * If you want the dollar they burn the coin. 35 | * This is widely believed to be a scam. 36 | * TRON 37 | * Something about a payment system for content. 38 | * Monero 39 | * Uses "ring signatures" to hide where funds are going. 40 | * Also uses some kind of ASIC resistant hashing method. 41 | * NEO 42 | * Like ETH but using proof of stake. 43 | * Dash 44 | * Has some kind of distributed method of decision making about the 45 | future of the currency. 46 | * Faster block interval. Some anonymity. 47 | * ZCash 48 | * More privacy by use of zero knowledge proof. 49 | -------------------------------------------------------------------------------- /randos/cryptocurrency/proof-of-stake-resources.md: -------------------------------------------------------------------------------- 1 | Epic set of tweets in which Vitalik explains PoS work: 2 | 3 | https://twitter.com/VitalikButerin/status/1029903583897051136?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1029903583897051136&ref_url=https%3A%2F%2Fcryptoslate.com%2Fvitalik-buterin-goes-on-tweet-storm-about-ethereums-casper%2F 4 | 5 | Paper for Casper Friendly Finality Gadget: https://arxiv.org/abs/1710.09437 6 | 7 | * Sounds like they admit that long range attacks are possible, *but* 8 | that you should use finality to prevent. 9 | 10 | Minimal Slashing Conditions: https://medium.com/@VitalikButerin/minimal-slashing-conditions-20f0b500fc6c 11 | 12 | * Basic idea is something I think I considered: what if you switch 13 | your vote to an incompatible chain? There are unjustified changes 14 | (like you just go ahead and sign two blocks) or justified changes 15 | (there's more support on the other side). 
16 | 17 | On Stake And Consensus: https://download.wpsoftware.net/bitcoin/pos.pdf 18 | 19 | * I think a view of PoS from a Bitcoin point of view.
-------------------------------------------------------------------------------- /randos/cuda/even-easier-introduction.md: -------------------------------------------------------------------------------- 1 | There is now a *unified memory* that CUDA offers. Allocate memory via 2 | `cudaMallocManaged` and this memory is accessible via both CPU *and* 3 | GPU! 4 | 5 | One of the advantages is that this lets memory be *migrated* from host 6 | to device as needed. So if you have deeply nested datastructures, you 7 | may not have to move over all allocations. OTOH, that sounds like 8 | something you don't want: you don't want to slowly traverse a 9 | datastructure in the device and then fault and then need to access the 10 | host memory... 11 | 12 | Because you can just get the answer back at the host whenever, you may 13 | want to use `cudaDeviceSynchronize` to *wait* for the answer. But I'm 14 | not sure what would happen if the kernel is not done and you try to 15 | read the data from the CPU side. Is part of `cudaDeviceSynchronize` that it 16 | reconciles host and device memory? 17 | 18 | Indeed the CPU must not access unified memory while the GPU is 19 | running tasks. You must use synchronize. 20 | 21 | But this does make things much easier to port over, at the cost of making 22 | a new implicit requirement of the programmer. 23 | 24 | Note: if you start multiple kernels, they just enter a queue.
-------------------------------------------------------------------------------- /randos/cuda/vector.cu: -------------------------------------------------------------------------------- 1 | #include <stdio.h> 2 | 3 | __global__ void mykernel(int* a, int* b, int* c) { 4 | int idx = (blockIdx.x * blockDim.x) + threadIdx.x; 5 | c[idx] = a[idx] * b[idx]; 6 | } 7 | 8 | // Probably needs to be a define since we'll use it in <<< 9 | #define NUM_BLOCKS 8 10 | #define NUM_THREADS_PER_BLOCK 64 11 | 12 | int main() { 13 | // Host 14 | int *a, *b, *c; 15 | // Device pointers 16 | int *d_a, *d_b, *d_c; 17 | int numElements = NUM_BLOCKS * NUM_THREADS_PER_BLOCK; 18 | int arraySize = numElements * sizeof(int); 19 | 20 | // Allocate and initialize host memory. 21 | a = (int *) malloc(arraySize); 22 | b = (int *) malloc(arraySize); 23 | c = (int *) malloc(arraySize); 24 | for (int i = 0; i < numElements; i++) { 25 | a[i] = i; 26 | b[i] = numElements - i; 27 | } 28 | 29 | // Allocate memory on device. Store pointers on host. 30 | // Pretty sure I don't need void** cast here. 31 | cudaMalloc((void**) &d_a, arraySize); 32 | cudaMalloc((void**) &d_b, arraySize); 33 | cudaMalloc((void**) &d_c, arraySize); 34 | 35 | // Copy host values to device. 36 | cudaMemcpy(d_a, a, arraySize, cudaMemcpyHostToDevice); 37 | cudaMemcpy(d_b, b, arraySize, cudaMemcpyHostToDevice); 38 | 39 | // Run kernel. 40 | mykernel<<<NUM_BLOCKS, NUM_THREADS_PER_BLOCK>>>(d_a, d_b, d_c); 41 | 42 | // Copy device values to host 43 | cudaMemcpy(c, d_c, arraySize, cudaMemcpyDeviceToHost); 44 | 45 | // Deallocate memory.
46 | cudaFree(d_a); 47 | cudaFree(d_b); 48 | cudaFree(d_c); 49 | 50 | for (int i = 0; i < numElements; i++) { 51 | printf("Result %d: %d = %d * %d\n", i, c[i], a[i], b[i]); 52 | } 53 | 54 | return 0; 55 | } 56 |
-------------------------------------------------------------------------------- /randos/denard.md: -------------------------------------------------------------------------------- 1 | Moore's law talks about exponential growth in the number of transistors 2 | that can be efficiently placed on a chip. 3 | 4 | Dennard scaling says that this leads to closer circuits, decreasing 5 | delay (increasing frequency), and decreasing power consumption per 6 | transistor (less resistance). 7 | 8 | Dennard scaling broke down because current leakage is apparently a big 9 | problem at very small sizes. More transistors are being placed, but 10 | we can't run them faster.
-------------------------------------------------------------------------------- /randos/endianness.md: -------------------------------------------------------------------------------- 1 | Big endian means the most significant byte is stored first. It's 2 | basically how you would write the number in binary. 3 | 4 | In little endian, the least significant byte is stored first. 5 | 6 | There is also the concept of "bit endianness." This doesn't really 7 | exist for CPUs, because there is no concept of bit addressing. All 8 | values that you use are manipulated in terms of byte quantities. So 9 | there's no way to tell what "order" an x86 CPU stores the bits in. It 10 | is transparent. 11 | 12 | But some media have bit serialization. Little endian bit ordering is 13 | used by Ethernet and USB.
-------------------------------------------------------------------------------- /randos/error-correcting-codes/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./error-correcting-codes.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 |
-------------------------------------------------------------------------------- /randos/error-correcting-codes/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./error-correcting-codes.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 |
-------------------------------------------------------------------------------- /randos/finance/american-depositary-receipt.md: -------------------------------------------------------------------------------- 1 | - Let us say you want to trade shares in a non-US company on a US stock 2 | exchange. 3 | - First, for an American to buy shares in a foreign company at a foreign 4 | exchange, you may have to do cross-currency transactions, which can be 5 | inconvenient. 6 | - Also, there can be all sorts of challenges in your relationship with 7 | the company. How will they remit you dividends? 8 | - So instead, a depositary receipt is traded in US dollars at a US 9 | exchange. 10 | - A US bank issues the ADRs, and corresponding shares are deposited at a 11 | foreign depositary bank.
The holder of the ADR may demand 12 | the foreign share, but holding the ADR is typically more convenient.
-------------------------------------------------------------------------------- /randos/finance/eurodollars.md: -------------------------------------------------------------------------------- 1 | A _eurodollar_ is a dollar that is deposited at a bank overseas 2 | (whether in Europe or Japan or wherever). It has nothing to do with 3 | the Euro currency. 4 | 5 | Basically, if an overseas bank accepts US deposits, it doesn't have to 6 | follow the same regulations as US banks. For instance, it's not a 7 | member of the Federal Reserve, so it doesn't have to pay for FDIC 8 | insurance. 9 | 10 | Likewise, the government of France is not going to guarantee US dollar 11 | deposits in a French bank. 12 | 13 | Because there is possibly greater risk, Eurodollars can pay higher 14 | return. (In fact, to be competitive, Eurodollars _must_ pay higher 15 | return, because there is greater risk.) 16 | 17 | Also, there may simply be demand for dollars overseas. If I am 18 | building an apartment building in Kenya, banks might prefer to 19 | denominate a loan in USD because they're worried about inflation of 20 | the Kenyan shilling. 21 | 22 | Likewise, individuals overseas may want to have their money in USD 23 | because they want to preserve the value of their savings (and pay 24 | their dollar-denominated debts). 25 | 26 | They say that historically the Eurodollar market started because the 27 | Marshall Plan sent a bunch of USD into Europe.
-------------------------------------------------------------------------------- /randos/finance/inflation-hedges.md: -------------------------------------------------------------------------------- 1 | - TIPS and floating rate bonds can provide some inflation protection. 2 | - However, people often criticize TIPS for undertracking true 3 | inflation. 4 | - Buying a house can be a good hedge, because it should hold its 5 | value. 6 | - Plus the fixed rate mortgage interest payments will depreciate 7 | in value. 8 | - Also, in the short term people may want to push its price up. 9 | - And, more money flowing into the economy can push up asset 10 | prices generally. 11 | - Stocks can be a good inflation hedge. In the short term, companies 12 | that cannot adjust their prices will be hurt by inflation. And 13 | inflation is not positive for the general economy, so stocks might 14 | underperform in the short term. But it's definitely better than cash. 15 | - Gold 16 | - Long-term demand for gold is not changing, and it holds its 17 | value. 18 | - Though, for shorter durations, you can probably buy other 19 | durable goods. 20 | 21 | On the other hand... 22 | 23 | - Bonds, especially long-dated bonds, will be destroyed if inflation 24 | is expected to be high for a long time. If you sell them, people 25 | will only buy for well below par. 26 | - Certificates of deposit will likewise be destroyed.
-------------------------------------------------------------------------------- /randos/finance/revolver.md: -------------------------------------------------------------------------------- 1 | A revolving line of credit is simple. It is like a credit card. They set 2 | an interest rate and a credit limit. You can draw money from the 3 | account. You have to make minimum payments monthly. 4 | 5 | The alternative is fixed-duration bonds. 6 | 7 | One expects the revolver to have higher interest rates. One also expects 8 | it to have a lower credit limit, and to be used primarily to smooth income 9 | rather than to finance projects.
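As a toy illustration of the mechanics (all numbers are invented), monthly interest accrual plus a minimum payment might look like:

```cpp
#include <iostream>

int main() {
  double balance = 5000.0;     // amount drawn on the revolver
  double apr = 0.18;           // hypothetical 18% annual rate
  double min_payment = 250.0;  // hypothetical minimum monthly payment

  for (int month = 1; month <= 3; month++) {
    balance += balance * (apr / 12.0);  // accrue one month of interest
    balance -= min_payment;             // make the minimum payment
    std::cout << "Month " << month << ": balance = " << balance << "\n";
  }
  return 0;
}
```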
10 |
-------------------------------------------------------------------------------- /randos/fuses.md: -------------------------------------------------------------------------------- 1 | Our fuse box contains six screw-in fuse circuits. Most cartridges in 2 | there are 15A or 20A. 3 | 4 | The original style of fuses was Type T. But 15A, 20A and 30A fuses 5 | could all be installed in the same outlets. That means it is easy to 6 | "overfuse" a circuit: use a fuse that passes more amperage than that 7 | circuit should allow. 8 | 9 | The Type T fuse size is the same Edison thread as lightbulb outlets. 10 | 11 | Type S fuses have different shapes for each of the 15A, 20A 12 | and 30A ratings. You're supposed to install - permanently - a Type S 13 | adapter into the panel. The adapter is specific to the appropriate fuse 14 | amperage for the circuit. This adapter will only allow the appropriate 15 | rated fuse to be screwed in. I believe that Type S fuses can only be 16 | installed into a Type S adapter - the gauge of the threads looks totally 17 | different to me. 18 | 19 | Type TL or SL means a "time delayed" fuse (of type T or S as described 20 | above). They will not blow right away if too much current is drawn; they 21 | allow excess current to pass for a brief period. This is often necessary 22 | for connecting motors to power, since they draw large currents on 23 | startup when the motor is not yet rotating. 24 | 25 | I believe that SL fuses are the standard ones you want to use most of the 26 | time. 27 | 28 | Source: https://www.pcainspect.com/differences-between-edison-fuse-types/
-------------------------------------------------------------------------------- /randos/geforce-now.md: -------------------------------------------------------------------------------- 1 | Basically, GeForce Now is a cloud computing service. I believe it runs 2 | Windows, and it has some graphics hardware attached. 3 | 4 | It streams the game to you. You interact through basically a dumb 5 | client. 6 | 7 | Thus, you can play games that aren't available for your operating system 8 | (or even hardware architecture). This is a big deal for me, as a Mac 9 | user. 10 | 11 | The GeForce Now app on your machine is linked with your Steam account. 12 | You have to buy the game on Steam. You can buy a game on Steam for 13 | Windows even if you are using MacOS. In fact, when you buy a game on 14 | Steam, you are allowed to install it via Steam on any architecture. 15 | 16 | When you play a game on GeForce Now, it will tunnel to the GFN server. 17 | There, it will launch your Steam account. This will then automatically 18 | launch the game. 19 | 20 | Your ability to use the GFN machine is limited. You don't have shell. 21 | You don't even really have easy access to Steam; the selected game is 22 | auto-launched for you. 23 | 24 | I believe the reason why is that some publishers object to GFN running 25 | their games. I don't know if they could beat GFN in a lawsuit about you 26 | playing a game you bought from them. Maybe there's some shrinkwrap 27 | license on games that says you won't use it in the cloud? I don't know. 28 | 29 | Anyway, GFN will not let you just launch any old game. You can only play 30 | games from publishers they have made deals with. 31 | 32 | As a last note: there are some alternative platforms to Steam, which are 33 | also supported by GFN.
These include Epic Games, Ubisoft, and Electronic 34 | Arts. 35 |
-------------------------------------------------------------------------------- /randos/http2.md: -------------------------------------------------------------------------------- 1 | Features of HTTP2 include: 2 | 3 | * Data compression of HTTP headers. 4 | * Server push of resources it thinks you need. 5 | * Asking for multiple resources over the same connection (pipelining) 6 | * I think HTTP 1.1 also offered this, but no "async" responding. 7 | * That is, the server had to respond in the same order as resources 8 | were requested. 9 | * This is an example of head-of-line blocking.
-------------------------------------------------------------------------------- /randos/info-retrieval.md: -------------------------------------------------------------------------------- 1 | * [Modern Information Retrieval](https://www.amazon.com/dp/0321416910) 2 | * One of the most recommended. From 2011. 3 | * Contents: http://people.ischool.berkeley.edu/~hearst/irbook/ 4 | * Seems very weak on distributed in particular. 5 | * [Introduction to Information Retrieval](https://www.amazon.com/dp/0521865719) 6 | * From 2008. Free on Stanford website: https://nlp.stanford.edu/IR-book/ 7 | * Not very much about indexing or distributed indexing IIRC. 8 | * Believe I already looked at this to learn more about postings 9 | lists. 10 | * [Information Retrieval: Implementing and Evaluating Search Engines](https://www.amazon.com/dp/0262528878) 11 | * 2016. Seems promising. 12 | * [Contents](https://mitpress.mit.edu/sites/default/files/titles/content/9780262026512_ind_0001.pdf) 13 | * Definitely worth looking at. 14 | * [Search Engines: Information Retrieval in Practice](https://www.amazon.com/dp/0136072240) 15 | * 2008. Seems old; not sure how promising I feel this would be. 16 | * [Information Retrieval: Algorithms and Heuristics](https://www.amazon.com/dp/1402030045) 17 | * 2004. Title seems promising but Springer books are random and 18 | this is very old. 19 | * [Text Data Management and Analysis: A Practical Introduction to Information Retrieval and Text Mining](https://www.amazon.com/dp/197000116X) 20 | * 2016. Wildcard. 21 | * [Contents](http://www.morganclaypoolpublishers.com/catalog_Orig/samples/9781970001174_sample.pdf) 22 | * Basically nothing at all about indexing. Garbage.
-------------------------------------------------------------------------------- /randos/inventions.md: -------------------------------------------------------------------------------- 1 | AM Transmitter 2 | Accelerometer 3 | Air Conditioning 4 | Airplane 5 | Altimeter 6 | Amplifier 7 | Anemometer (airspeed); speedometer 8 | Aqueduct, Dam 9 | Balloon 10 | Barometer 11 | Batteries 12 | Battery 13 | Bicycle 14 | CD, DVD 15 | CPU, Adders, Latches 16 | Calculator 17 | 18 | Camera, Movie camera, Photography 19 | Cars 20 | Credit card, magnetic strip. QR code 21 | Electric Motor, Combustion engine 22 | Electric guitar 23 | Electron microscope 24 | Fax 25 | Fiber optic cable 26 | Flashlight 27 | Fuel gauge 28 | GPS 29 | Glasses, Telescope 30 | Gyroscopic stabilization 31 | Helicopter 32 | Helicopter, drone. 33 | Inclinometer 34 | Inertial navigation 35 | Internal combustion engine, steam engine 36 | Internet 37 | Inverter 38 | Jet engine 39 | Jet engine, rocket engine. 40 | LCD Screen, CRT 41 | LED 42 | Lasers, bar code.
43 | Light meter 44 | Lock and Key 45 | Locomotive 46 | Loudspeaker 47 | Magnetic Tape (audiotape) 48 | Metal detector 49 | Metalworking, machining 50 | Microphone 51 | Microwave oven 52 | Missile 53 | Morse Code 54 | Pendulum clock 55 | Power network 56 | Quartz clock 57 | Radio 58 | Record player 59 | Remote control 60 | Satellites, Satellite communication 61 | Scale 62 | Skyscraper 63 | Solar panel 64 | Sonar, Radar 65 | Steel 66 | Synthesizer 67 | Tachometer 68 | Telegraph, Telephone, Wireless Phone 69 | Television 70 | Thermal imaging 71 | Thermometer 72 | Video game 73 | Voltmeter, ammeter, ohmmeter 74 | World Wide Web 75 | X-ray, ultrasound 76 | Laser rangefinder 77 | -------------------------------------------------------------------------------- /randos/latex/latex-for-beginners.md: -------------------------------------------------------------------------------- 1 | **Notes From Latex For Beginners** 2 | 3 | Sections: 4 | 5 | * `\section, \subsection, \subsubsection`... 6 | 7 | They mention using `\label` to label a section. You can then use `\ref` 8 | and `\pageref`. 9 | 10 | Pretty weak resource actually. The Not So Short intro is much more 11 | useful (though also longer). 12 | -------------------------------------------------------------------------------- /randos/latex/latex.md: -------------------------------------------------------------------------------- 1 | TODOS: 2 | 3 | * Latex 2e Unofficial Manual (183 pages) 4 | * IEEEeqnarray docs. 5 | * graphicx 6 | * bibtex 7 | 8 | I think that a *TeX* resource is also important. Stuff like 9 | `\expandafter` and `\csname` come from TeX, not LaTeX. 10 | 11 | I believe the proper resource for this is the TeXbook! 12 | 13 | ## Randos 14 | 15 | * I saw the `\rlap{xyz}` command. It basically puts `xyz` in a 16 | zero-width box, allowing it to overlap with whatever is to the right. 17 | This can be helpful for alignment in array-type environments, if you 18 | want some overlap of columns. 19 | * Source: https://tex.stackexchange.com/questions/353588/ieeeeqnarray-with-multiple-alignment-points 20 | -------------------------------------------------------------------------------- /randos/latex/mathmode.md: -------------------------------------------------------------------------------- 1 | **Notes For Math Mode (Herbert Voss)** 2 | 3 | This used to be a good resource (according to some), but it is now 4 | deprecated. It is typeset quite poorly and feels cobbled together. I 5 | don't think it is a good use of my time (especially at 143 pages!). 6 | -------------------------------------------------------------------------------- /randos/latex/unicode-fonts-in-latex.md: -------------------------------------------------------------------------------- 1 | You need to do something like this to have unicode fonts in LaTeX: 2 | 3 | ``` 4 | \usepackage{fontspec} 5 | \usepackage{unicode-math} 6 | \setmathfont[ 7 | Path = /Library/TeX/Root/texmf-dist/fonts/opentype/public/firamath/, 8 | Extension = .otf, 9 | ]{FiraMath-Regular} 10 | ``` 11 | 12 | I think `fontspec` gives you the `\setmainfont` macro, and 13 | `unicode-math` gives you `\setmathfont`. You then have to give it a 14 | path to the font directory. You give the extension type, and the name of the font. 15 | 16 | I think `unicode-math` in theory allows you to use unicode characters 17 | in math mode. But I think that `FiraMath` doesn't actually support 18 | unicode characters? Not sure. 19 | 20 | I think you need to build with `xelatex`, else `latex` will freak out 21 | when it sees unicode characters.
22 | 23 | That's all I know for now. 24 | -------------------------------------------------------------------------------- /randos/lockpicking/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./lockpicking.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /randos/lockpicking/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./lockpicking.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /randos/make-electronics/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./make-electronics.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /randos/make-electronics/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./make-electronics.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /randos/maybe-more-books.md: -------------------------------------------------------------------------------- 1 | # Manning 2 | 3 | Akka in Action 4 | *Arduino in Action* 5 | D3.js in Action 6 | Erlang and OTP in Action 7 | Front-End Tooling with Gulp, Bower, and Yeoman 8 | Getting MEAN with Mongo, Express, Angular, and Node 9 | HBase in Action 10 | Oculus Rift in Action 11 | **OpenCL in Action** 12 | Play for Scala 13 | *RxJS in Action* 14 | *Taming Text* (has stuff on search, string matching) 15 | Unity in Action 16 | 17 | # Pragmatic 18 | 19 | None are interesting. What a bunch of trash. 20 | 21 | # No Starch Press 22 | 23 | Arduino Playground 24 | Arduino Project Handbook 25 | Arduino Workshop 26 | Snip, Burn, Solder, Shred 27 | 28 | # O'Reilly 29 | 30 | Too many to keep track of. 31 | 32 | * Designing Data-Intensive Applications 33 | * By the guy who wrote that big long Samza thing. Could be 34 | interesting. 35 | * Not done yet!
36 | 37 | # Other 38 | 39 | * Apress (I find they mostly suck) 40 | -------------------------------------------------------------------------------- /randos/music-theory/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./music-theory.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /randos/music-theory/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./music-theory.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /randos/ned-electronics/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./ned-electronics.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /randos/ned-electronics/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./ned-electronics.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /randos/online-electircal-engineering-courses.md: -------------------------------------------------------------------------------- 1 | https://www.edx.org/course/circuits-and-electronics-1-basic-circuit-analysi-2 2 | https://www.edx.org/course/circuits-and-electronics-2-amplification-speed-a-2 3 | https://www.edx.org/course/circuits-and-electronics-3-applications-2 4 | 5 | https://www.coursera.org/learn/linear-circuits-dcanalysis 6 | https://www.coursera.org/learn/linear-circuits-ac-analysis 7 | 8 | https://www.coursera.org/learn/power-electronics?specialization=power-electronics 9 | https://www.coursera.org/learn/converter-circuits?specialization=power-electronics 10 | https://www.coursera.org/learn/converter-control?specialization=power-electronics 11 | https://www.coursera.org/learn/magnetics-for-power-electronic-converters-v2?specialization=power-electronics 12 | 13 | https://www.coursera.org/learn/dsp1?specialization=digital-signal-processing 14 | https://www.coursera.org/learn/dsp2?specialization=digital-signal-processing 15 | https://www.coursera.org/learn/dsp3?specialization=digital-signal-processing 16 | https://www.coursera.org/learn/dsp4?specialization=digital-signal-processing 17 | 18 | https://www.coursera.org/specializations/semiconductor-devices#courses 19 | -------------------------------------------------------------------------------- /randos/papers/bigtable.pdf: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/ruggeri/tech-notes/bd3981d766c26105c82bc55ea05fca0dacdb1b42/randos/papers/bigtable.pdf -------------------------------------------------------------------------------- /randos/papers/chubby.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ruggeri/tech-notes/bd3981d766c26105c82bc55ea05fca0dacdb1b42/randos/papers/chubby.pdf -------------------------------------------------------------------------------- /randos/papers/dynamo.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ruggeri/tech-notes/bd3981d766c26105c82bc55ea05fca0dacdb1b42/randos/papers/dynamo.pdf -------------------------------------------------------------------------------- /randos/papers/f1.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ruggeri/tech-notes/bd3981d766c26105c82bc55ea05fca0dacdb1b42/randos/papers/f1.pdf -------------------------------------------------------------------------------- /randos/papers/gfs.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ruggeri/tech-notes/bd3981d766c26105c82bc55ea05fca0dacdb1b42/randos/papers/gfs.pdf -------------------------------------------------------------------------------- /randos/papers/mapreduce.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ruggeri/tech-notes/bd3981d766c26105c82bc55ea05fca0dacdb1b42/randos/papers/mapreduce.pdf -------------------------------------------------------------------------------- /randos/papers/megastore.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ruggeri/tech-notes/bd3981d766c26105c82bc55ea05fca0dacdb1b42/randos/papers/megastore.pdf -------------------------------------------------------------------------------- /randos/papers/percolator.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ruggeri/tech-notes/bd3981d766c26105c82bc55ea05fca0dacdb1b42/randos/papers/percolator.pdf -------------------------------------------------------------------------------- /randos/papers/spanner.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ruggeri/tech-notes/bd3981d766c26105c82bc55ea05fca0dacdb1b42/randos/papers/spanner.pdf -------------------------------------------------------------------------------- /randos/physics/feynman/01-atoms-in-motion/01-atoms-in-motion.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \input{./01a-atoms-in-motion.tex} 3 | } 4 | 5 | \ifx\bookmode\undefined 6 | \input{../common/preamble.tex} 7 | 8 | \setcounter{section}{0} 9 | 10 | \begin{document} 11 | \chapterbody 12 | \end{document} 13 | \else 14 | \chapterbody 15 | \fi 16 | -------------------------------------------------------------------------------- /randos/physics/feynman/02-basic-physics/02-basic-physics.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \input{./02a-basic-physics.tex} 3 | } 4 | 5 | \ifx\bookmode\undefined 6 | \input{../common/preamble.tex} 7 | 8 | 
\setcounter{section}{1} 9 | 10 | \begin{document} 11 | \chapterbody 12 | \end{document} 13 | \else 14 | \chapterbody 15 | \fi 16 | -------------------------------------------------------------------------------- /randos/physics/feynman/03-the-relation-of-physics-to-other-sciences/03-the-relation-of-physics-to-other-sciences.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \input{./03a-the-relation-of-physics-to-other-sciences.tex} 3 | } 4 | 5 | \ifx\bookmode\undefined 6 | \input{../common/preamble.tex} 7 | 8 | \setcounter{section}{2} 9 | 10 | \begin{document} 11 | \chapterbody 12 | \end{document} 13 | \else 14 | \chapterbody 15 | \fi 16 | -------------------------------------------------------------------------------- /randos/physics/feynman/03-the-relation-of-physics-to-other-sciences/03a-the-relation-of-physics-to-other-sciences.tex: -------------------------------------------------------------------------------- 1 | \section{The Relation of Physics to Other Sciences} 2 | 3 | \begin{enumerate} 4 | \item This chapter just talks about how every science can reduce to 5 | physics. 6 | 7 | \item He takes the time to shit on psychology though! 8 | \end{enumerate} 9 | -------------------------------------------------------------------------------- /randos/physics/feynman/04-conservation-of-energy/04-conservation-of-energy.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \input{04a-conservation-of-energy.tex} 3 | } 4 | 5 | \ifx\bookmode\undefined 6 | \input{../common/preamble.tex} 7 | 8 | \setcounter{section}{3} 9 | 10 | \begin{document} 11 | \chapterbody 12 | \end{document} 13 | \else 14 | \chapterbody 15 | \fi 16 | 17 | -------------------------------------------------------------------------------- /randos/physics/feynman/05-time-and-distance/05-time-and-distance.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \input{05a-time-and-distance.tex} 3 | } 4 | 5 | \ifx\bookmode\undefined 6 | \input{../common/preamble.tex} 7 | 8 | \setcounter{section}{4} 9 | 10 | \begin{document} 11 | \chapterbody 12 | \end{document} 13 | \else 14 | \chapterbody 15 | \fi 16 | 17 | -------------------------------------------------------------------------------- /randos/physics/feynman/05-time-and-distance/05a-time-and-distance.tex: -------------------------------------------------------------------------------- 1 | \section{Time and Distance} 2 | 3 | \begin{enumerate} 4 | \item He mentions how to ``define'' time in terms of measuring it. For 5 | instance: you can say time is defined by how long it takes an 6 | hourglass to drain its sand. 7 | 8 | \item He mentions that we use pendulums to measure time in a 9 | grandfather clock. We can also use electrical oscillators. He mentions 10 | that below a certain limit, the concept of time may break down, in 11 | the sense that it becomes impossible to measure. 12 | 13 | \item He mentions radiocarbon dating. 14 | 15 | \item Next he discusses distance. He discusses direct measurement, 16 | using triangulation, and using radar (which involves measuring time and 17 | assuming a velocity for the EM wave). 18 | 19 | \item He mentions how you might measure the \emph{cross-sectional 20 | area} of a nucleus. You send a beam of particles through a material, 21 | and see how many get through. You have to do a little math because 22 | even a thin material will have many layers of atoms.
The more layers, 23 | of course, the more absorption. But because nuclei are spherical, 24 | there is a formula. (BTW I think this is what Fermi did.) 25 | 26 | \item He finally notes that measurements of distance and time will be 27 | \emph{relative} to the motion of an observer. 28 | 29 | \item And also: the uncertainty principle has already told us about a 30 | relationship between certainty in distance and certainty in momentum. 31 | It turns out this has implications for our ability to reliably measure 32 | time. 33 | \end{enumerate} 34 | -------------------------------------------------------------------------------- /randos/physics/feynman/06-probability/06-probability.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \input{06a-probability.tex} 3 | } 4 | 5 | \ifx\bookmode\undefined 6 | \input{../common/preamble.tex} 7 | 8 | \setcounter{section}{5} 9 | 10 | \begin{document} 11 | \chapterbody 12 | \end{document} 13 | \else 14 | \chapterbody 15 | \fi 16 | 17 | -------------------------------------------------------------------------------- /randos/physics/feynman/07-the-theory-of-gravitation/07-the-theory-of-gravitation.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \input{07a-the-theory-of-gravitation.tex} 3 | } 4 | 5 | \ifx\bookmode\undefined 6 | \input{../common/preamble.tex} 7 | 8 | \setcounter{section}{6} 9 | 10 | \begin{document} 11 | \chapterbody 12 | \end{document} 13 | \else 14 | \chapterbody 15 | \fi 16 | 17 | -------------------------------------------------------------------------------- /randos/physics/feynman/08-motion/08-motion.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \input{08a-motion.tex} 3 | } 4 | 5 | \ifx\bookmode\undefined 6 | \input{../common/preamble.tex} 7 | 8 | \setcounter{section}{7} 9 | 10 | \begin{document} 11 | \chapterbody 12 | \end{document} 13 | \else 14 | \chapterbody 15 | \fi 16 | 17 | -------------------------------------------------------------------------------- /randos/physics/feynman/08-motion/08a-motion.tex: -------------------------------------------------------------------------------- 1 | \section{Motion} 2 | 3 | \begin{enumerate} 4 | \item Defines speed and acceleration as derivatives. Shows how to 5 | integrate speed to find distance. 6 | 7 | \item Discusses the two-dimensional and three-dimensional versions of 8 | speed/velocity.
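\item To pin the notation down (this summary is mine, not Feynman's):
with position $x(t)$ in one dimension,

\begin{nedqn}
  v
  \eqcol
  \frac{dx}{dt}
  \\
  a
  \eqcol
  \frac{dv}{dt}
  \\
  x(T)
  \eqcol
  x(0) + \int_0^T v(t) \, dt
\end{nedqn}

\noindent
The last line is the ``integrate speed to find distance'' point above.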
9 | \end{enumerate} 10 | -------------------------------------------------------------------------------- /randos/physics/feynman/09-newtons-laws-of-dynamics/09-newtons-laws-of-dynamics.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \input{09a-newtons-laws-of-dynamics.tex} 3 | } 4 | 5 | \ifx\bookmode\undefined 6 | \input{../common/preamble.tex} 7 | 8 | \setcounter{section}{8} 9 | 10 | \begin{document} 11 | \chapterbody 12 | \end{document} 13 | \else 14 | \chapterbody 15 | \fi 16 | 17 | -------------------------------------------------------------------------------- /randos/physics/feynman/10-conservation-of-momentum/10-conservation-of-momentum.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \input{10a-conservation-of-momentum.tex} 3 | } 4 | 5 | \ifx\bookmode\undefined 6 | \input{../common/preamble.tex} 7 | 8 | \setcounter{section}{9} 9 | 10 | \begin{document} 11 | \chapterbody 12 | \end{document} 13 | \else 14 | \chapterbody 15 | \fi 16 | 17 | -------------------------------------------------------------------------------- /randos/physics/feynman/11-vectors/11-vectors.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \input{11a-vectors.tex} 3 | } 4 | 5 | \ifx\bookmode\undefined 6 | \input{../common/preamble.tex} 7 | 8 | \setcounter{section}{10} 9 | 10 | \begin{document} 11 | \chapterbody 12 | \end{document} 13 | \else 14 | \chapterbody 15 | \fi 16 | 17 | -------------------------------------------------------------------------------- /randos/physics/feynman/11-vectors/11a-vectors.tex: -------------------------------------------------------------------------------- 1 | \section{Vectors} 2 | 3 | \begin{enumerate} 4 | 5 | \item He shows that if forces are invariant under 6 | translation/rotation, then acceleration is also. This is a simple 7 | mathematical fact. 8 | 9 | \item He doesn't show that forces like gravity are invariant under 10 | translation/rotation, but that would be easy to show. 11 | 12 | \item He then introduces vector notation. 13 | 14 | \item He explains how derivatives generalize to vector-valued 15 | functions. I don't think he ever calculates a gradient, though. He's 16 | only using derivatives wrt time so far? 17 | 18 | \item He explains what the norm is. 19 | 20 | \item He explains the dot product.
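\item For reference, the two characterizations of the dot product (my
notation, not necessarily his):

\begin{nedqn}
  \mathbf{a} \cdot \mathbf{b}
  \eqcol
  a_x b_x + a_y b_y + a_z b_z
  \\
  \eqcol
  |\mathbf{a}| \, |\mathbf{b}| \cos \theta
\end{nedqn}

\noindent
where $\theta$ is the angle between the vectors. The equivalence of the
coordinate form and the geometric form is what makes the dot product so
useful: it connects coordinates to geometry.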
21 | 22 | \end{enumerate} 23 | -------------------------------------------------------------------------------- /randos/physics/feynman/12-characteristics-of-force/12-characteristics-of-force.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \input{12a-characteristics-of-force.tex} 3 | } 4 | 5 | \ifx\bookmode\undefined 6 | \input{../common/preamble.tex} 7 | 8 | \setcounter{section}{11} 9 | 10 | \begin{document} 11 | \chapterbody 12 | \end{document} 13 | \else 14 | \chapterbody 15 | \fi 16 | 17 | -------------------------------------------------------------------------------- /randos/physics/feynman/13-work-and-potential-energy-a/13-work-and-potential-energy-a.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \input{13a-work-and-potential-energy-a.tex} 3 | } 4 | 5 | \ifx\bookmode\undefined 6 | \input{../common/preamble.tex} 7 | 8 | \setcounter{section}{12} 9 | 10 | \begin{document} 11 | \chapterbody 12 | \end{document} 13 | \else 14 | \chapterbody 15 | \fi 16 | 17 | -------------------------------------------------------------------------------- /randos/physics/feynman/14-work-and-potential-energy-conclusion/14-work-and-potential-energy-conclusion.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \input{14a-work-and-potential-energy-conclusion.tex} 3 | } 4 | 5 | \ifx\bookmode\undefined 6 | \input{../common/preamble.tex} 7 | 8 | \setcounter{section}{13} 9 | 10 | \begin{document} 11 | \chapterbody 12 | \end{document} 13 | \else 14 | \chapterbody 15 | \fi 16 | 17 | -------------------------------------------------------------------------------- /randos/physics/feynman/15-the-special-theory-of-relativity/15-the-special-theory-of-relativity.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \section{The Special Theory of Relativity} 3 | \input{15a-principles.tex} 4 | \input{15b-michelson-morley.tex} 5 | \input{15c-lorentzian-transformation.tex} 6 | \input{15d-time-dilation.tex} 7 | \input{15e-simultaneity.tex} 8 | \input{15f-space-time.tex} 9 | \input{15g-relativistic-mass.tex} 10 | \input{15h-relativistic-energy.tex} 11 | \input{15i-pedagogical-note.tex} 12 | } 13 | 14 | \ifx\bookmode\undefined 15 | \input{../common/preamble.tex} 16 | 17 | \setcounter{section}{14} 18 | 19 | \begin{document} 20 | \chapterbody 21 | \end{document} 22 | \else 23 | \chapterbody 24 | \fi 25 | -------------------------------------------------------------------------------- /randos/physics/feynman/15-the-special-theory-of-relativity/15f-space-time.tex: -------------------------------------------------------------------------------- 1 | \subsection{Space-Time} 2 | 3 | \begin{enumerate} 4 | 5 | \item We may ask: isn't it weird that the Lorentzian transform turns 6 | out to be the ``right one?'' Why not the Galilean transformation? 7 | Intuitively, the Galilean one feels like it should be right. 8 | 9 | This relates to our understanding of space and time. I think it is 10 | Minkowski who introduced the concept of ``space-time.'' When you work 11 | within that picture of space-time, the Lorentzian transformation becomes 12 | the natural one. I think. I don't believe Feynman fully discusses this 13 | in this chapter yet.
14 | 15 | \item It is noted that if you treat $(x, y, z, ct)$ as a four-vector 16 | (or even just $(x, ct)$ as space-time with one space dimension), you 17 | should find that $(x', ct')$ is simply a (hyperbolic) rotation of the 18 | original coordinates. I checked this but don't bother to repeat the math here. 19 | 20 | \end{enumerate} 21 | -------------------------------------------------------------------------------- /randos/physics/feynman/16-relativistic-energy-and-momentum/16-relativistic-energy-and-momentum.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \section{Relativistic Energy and Momentum} 3 | 4 | \input{16a-relativity-and-accelerations} 5 | \input{16b-transformation-of-velocities} 6 | \input{16c-derivation-of-relativistic-mass} 7 | \input{16d-relativistic-energy} 8 | \input{16e-potential-energy} 9 | 10 | \subsection{TODOs} 11 | \begin{enumerate} 12 | \item \TODO{Prove using derivatives/calculus how mass must vary with 13 | velocity. I think there must be a more straightforward way than the 14 | one Feynman used.} 15 | 16 | \item \TODO{Prove using derivatives/calculus the mass-energy 17 | equivalence. Feynman considered a gas and an approximation using the 18 | binomial theorem. Can we do something exact?} 19 | 20 | \item \TODO{Is rest mass real? Or is it just energy? Is potential 21 | energy real? Or is it just kinetic energy?} 22 | \end{enumerate} 23 | } 24 | 25 | \ifx\bookmode\undefined 26 | \input{../common/preamble.tex} 27 | 28 | \setcounter{section}{15} 29 | 30 | \begin{document} 31 | \chapterbody 32 | \end{document} 33 | \else 34 | \chapterbody 35 | \fi 36 | -------------------------------------------------------------------------------- /randos/physics/feynman/16-relativistic-energy-and-momentum/16a-relativity-and-accelerations.tex: -------------------------------------------------------------------------------- 1 | \subsection{Relativity and Accelerations} 2 | 3 | \begin{enumerate} 4 | 5 | \item Feynman notes a potential paradox. Imagine Alice and Bob. Bob 6 | flies away, and returns. He is younger than Alice now. 7 | 8 | \item But from Bob's perspective, wasn't \emph{Alice} the one who flew 9 | away? But she came back older. Doesn't this violate symmetry? 10 | 11 | \item The solution is simple. Bob cannot imagine that Alice flew away, 12 | because it was Bob who experienced accelerations. Even if he was born 13 | on the spaceship, Bob needs to undergo forces to turn back around to 14 | get to Alice. This breaks the symmetry and explains the difference. 15 | 16 | \item BTW, I give a complete explanation in the next chapter. It is 17 | not immediately clear why the accelerations matter. Spoiler: what 18 | matters is that Bob has taken a curved path through space-time, while 19 | Alice has taken a straight-line path.
20 | 21 | \end{enumerate} 22 | -------------------------------------------------------------------------------- /randos/physics/feynman/17-space-time/17-space-time.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \section{Space-Time} 3 | 4 | \input{17a-minkowski-space.tex} 5 | \input{17b-four-momentum.tex} 6 | \input{17c-photons.tex} 7 | \input{17d-proper-time.tex} 8 | 9 | \subsection{TODOs} 10 | \begin{enumerate} 11 | \item \TODO{Should prove contravariance of four-momentum wrt Lorentz 12 | transformations.} 13 | \end{enumerate} 14 | } 15 | 16 | \ifx\bookmode\undefined 17 | \input{../common/preamble.tex} 18 | 19 | \setcounter{section}{16} 20 | 21 | \begin{document} 22 | \chapterbody 23 | \end{document} 24 | \else 25 | \chapterbody 26 | \fi 27 | -------------------------------------------------------------------------------- /randos/physics/feynman/17-space-time/17c-photons.tex: -------------------------------------------------------------------------------- 1 | \subsection{Photons} 2 | 3 | \begin{enumerate} 4 | 5 | \item Last, Feynman discusses a little about photons. Photons travel 6 | at $c$ in every rest frame. To prove this, remember our formula for 7 | relativistic change in velocity: 8 | 9 | \begin{nedqn} 10 | c' 11 | \eqcol 12 | \frac{ 13 | c - u 14 | }{ 15 | 1 - uc / c^2 16 | } 17 | \\ 18 | \eqcol 19 | \frac{ 20 | c - u 21 | }{ 22 | \parens{c - u} / c 23 | } 24 | \\ 25 | \eqcol 26 | c 27 | \end{nedqn} 28 | 29 | \item Photons have zero rest mass. Feynman says photons have energy $E 30 | = \hslash f$. I'm not sure what his units are here; if $f$ were an 31 | ordinary frequency I'd expect $E = hf$, so presumably his $f$ is angular. 32 | 33 | \item He also notes that the momentum of a photon is $p = \hslash f / 34 | c$. 35 | 36 | \item He notes that if you are assuming $c = 1$, then energy and 37 | momentum are equal. Again: I'm not sure what the units are here... 38 | 39 | \item Since photons possess momentum, they have 40 | (relativistic) mass. Moreover, saying energy equals momentum 41 | implies the rest mass of a photon must be zero. 42 | 43 | \end{enumerate} 44 | -------------------------------------------------------------------------------- /randos/physics/feynman/18-rotation-in-two-dimensions/18-rotation-in-two-dimensions.tex: -------------------------------------------------------------------------------- 1 | % TODO: probably want to get rid of these!
2 | \newcommand{\atan}{a^{\text{TAN}}} 3 | \newcommand{\Ftan}{F^{\text{TAN}}} 4 | \newcommand{\vtan}{v^{\text{TAN}}} 5 | \newcommand{\arad}{a^{\text{RAD}}} 6 | \newcommand{\Frad}{F^{\text{RAD}}} 7 | \newcommand{\vrad}{v^{\text{RAD}}} 8 | 9 | \def\chapterbody{ 10 | \section{Rotation In Two Dimensions} 11 | 12 | \input{18a-rigid-bodies.tex} 13 | \input{18b-total-translational-force.tex} 14 | \input{18c-rotation-and-torque.tex} 15 | \input{18d-conservation-of-angular-momentum.tex} 16 | \input{18e-total-rotational-inertia.tex} 17 | } 18 | 19 | \ifx\bookmode\undefined 20 | \input{../common/preamble.tex} 21 | 22 | \setcounter{section}{17} 23 | 24 | \begin{document} 25 | \chapterbody 26 | \end{document} 27 | \else 28 | \chapterbody 29 | \fi 30 | -------------------------------------------------------------------------------- /randos/physics/feynman/18-rotation-in-two-dimensions/18-xx-lever-arm-diagram.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ruggeri/tech-notes/bd3981d766c26105c82bc55ea05fca0dacdb1b42/randos/physics/feynman/18-rotation-in-two-dimensions/18-xx-lever-arm-diagram.jpeg -------------------------------------------------------------------------------- /randos/physics/feynman/18-rotation-in-two-dimensions/18a-rigid-bodies.tex: -------------------------------------------------------------------------------- 1 | \subsection{Rigid Bodies} 2 | 3 | \begin{enumerate} 4 | \item We're going to explore what happens when \define{external} 5 | forces are applied to an object (or \define{system}). Within the 6 | system, \define{internal} forces between subparts of the system may 7 | develop. We hope that we will be able to \emph{ignore} the effect of 8 | the internal forces. 9 | 10 | \item Two extreme systems are (1) a system where there are \emph{no} 11 | interaction effects (totally ``non-rigid''), and (2) a system which is 12 | a \define{rigid body}: every point of the object remains at a fixed 13 | and unvarying distance from every other point of the object. 14 | 15 | \item What kind of transformations can a rigid body undergo? These 16 | transformations must be \define{isometries} of the object. 17 | 18 | \item We know that translating all points in the object does not 19 | deform it. We also know that rotation about an axis does not deform 20 | the object. 21 | 22 | \item Mirroring is also an isometry, but it is not possible to perform 23 | continuously on a rigid body. So we are restricted to 24 | roto-translations.
25 | \end{enumerate} 26 | -------------------------------------------------------------------------------- /randos/physics/feynman/19-center-of-mass-moment-of-inertia/19-center-of-mass-moment-of-inertia.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \section{Center of Mass; Moment of Inertia} 3 | 4 | \input{19a-properties-of-com.tex} 5 | \input{19b-center-of-gravity.tex} 6 | \input{19c-com-and-inertia-calculations.tex} 7 | \input{19d-rotational-kinetic-energy.tex} 8 | \input{19e-coriolis-force-for-radial-travel.tex} 9 | \input{19f-coriolis-force-for-tangential-travel.tex} 10 | } 11 | 12 | \ifx\bookmode\undefined 13 | \input{../common/preamble.tex} 14 | 15 | \setcounter{section}{18} 16 | 17 | \begin{document} 18 | \chapterbody 19 | \end{document} 20 | \else 21 | \chapterbody 22 | \fi 23 | -------------------------------------------------------------------------------- /randos/physics/feynman/20-rotation-in-space/20-rotation-in-space.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \section{Rotation in Space} 3 | 4 | \input{20a-rotation-in-space.tex} 5 | } 6 | 7 | \ifx\bookmode\undefined 8 | \input{../common/preamble.tex} 9 | 10 | \setcounter{section}{19} 11 | 12 | \begin{document} 13 | \chapterbody 14 | \end{document} 15 | \else 16 | \chapterbody 17 | \fi 18 | -------------------------------------------------------------------------------- /randos/physics/feynman/21-the-harmonic-oscillator/21-the-harmonic-oscillator.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \section{The Harmonic Oscillator} 3 | 4 | \input{21a-the-harmonic-oscillator.tex} 5 | } 6 | 7 | \ifx\bookmode\undefined 8 | \input{../common/preamble.tex} 9 | 10 | \setcounter{section}{20} 11 | 12 | \begin{document} 13 | \chapterbody 14 | \end{document} 15 | \else 16 | \chapterbody 17 | \fi 18 | -------------------------------------------------------------------------------- /randos/physics/feynman/39-the-kinetic-theory-of-gases/39-the-kinetic-theory-of-gases.tex: -------------------------------------------------------------------------------- 1 | \def\chapterbody{ 2 | \section{The Kinetic Theory of Gases} 3 | 4 | \input{39a-the-kinetic-theory-of-gases.tex} 5 | } 6 | 7 | \ifx\bookmode\undefined 8 | \input{../common/preamble.tex} 9 | 10 | \setcounter{section}{38} 11 | 12 | \begin{document} 13 | \chapterbody 14 | \end{document} 15 | \else 16 | \chapterbody 17 | \fi 18 | -------------------------------------------------------------------------------- /randos/physics/feynman/39-the-kinetic-theory-of-gases/39a-the-kinetic-theory-of-gases.tex: -------------------------------------------------------------------------------- 1 | \subsection{Mass On a Spring} 2 | 3 | \begin{enumerate} 4 | \item Test 5 | \end{enumerate} 6 | -------------------------------------------------------------------------------- /randos/physics/feynman/clean-latex: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | # Clean all latex tmp files. 4 | gfind . 
-regex ".*\\.\\(aux\|fdb_latexmk\|fls\|log\|gz\|out\\)$" | parallel rm 5 | -------------------------------------------------------------------------------- /randos/physics/feynman/common/preamble.tex: -------------------------------------------------------------------------------- 1 | \documentclass[11pt, oneside]{amsart} 2 | 3 | \usepackage{geometry} 4 | \geometry{letterpaper} 5 | 6 | \usepackage{ned-common} 7 | \usepackage{ned-calculus} 8 | \usepackage{ned-linear-algebra} 9 | \usepackage{ned-stats} 10 | 11 | \usepackage[version=4]{mhchem} 12 | \usepackage[free-standing-units=true]{siunitx} 13 | 14 | % newcommands used across sections 15 | % TODO: should these move out to my common libs? 16 | 17 | \newcommand{\CoM}{\text{CoM}} 18 | -------------------------------------------------------------------------------- /randos/physics/mechanics/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./mechanics.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /randos/physics/mechanics/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./mechanics.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /randos/physics/mechanics/mechanics.tex: -------------------------------------------------------------------------------- 1 | \documentclass[11pt, oneside]{amsart} 2 | 3 | \usepackage{geometry} 4 | \geometry{letterpaper} 5 | 6 | \usepackage{ned-common} 7 | \usepackage{ned-calculus} 8 | \usepackage{ned-linear-algebra} 9 | 10 | % TODO: Move these definitions elsewhere? 11 | \usepackage[free-standing-units=true]{siunitx} 12 | \newcommand{\mps}{\meter / \s} 13 | \newcommand{\mpss}{\meter / \s^2} 14 | 15 | \begin{document} 16 | 17 | \title{Introductory Mechanics} 18 | \maketitle 19 | 20 | \input{00-gravitation} 21 | \input{01-momentum} 22 | \input{02-energy} 23 | 24 | % Tension? 25 | 26 | % Energy? 27 | 28 | \end{document} 29 | -------------------------------------------------------------------------------- /randos/physics/power-inverter.md: -------------------------------------------------------------------------------- 1 | The goal is to convert a DC source to AC. 2 | 3 | A rough first approximation of AC can be produced by flipping the 4 | orientation of a battery 120 times per second (for 60Hz output). 5 | 6 | Once you have alternating current, of course you can step up the voltage 7 | to 120V or whatever you want using a transformer. 8 | 9 | You are going to need some kind of timing circuit to electronically 10 | control the switching, presumably using transistors as switches. 11 | 12 | How are you going to smooth the output? A first way is to slice up a 13 | cycle into time segments, and close the circuit during each segment for 14 | a fraction of time proportional to the expected voltage at the segment's midpoint.
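Here is a toy Python sketch of that time-slicing idea, just to make the
arithmetic concrete. Everything here (function name, segment count) is my
own invention, not from the video:

```
import math

def duty_cycles(n_segments=20, freq_hz=60.0):
    # For each slice of one AC cycle, compute (duty, polarity): the
    # fraction of the slice the switch should be closed, and which
    # orientation the battery should be switched to.
    period = 1.0 / freq_hz
    dt = period / n_segments
    schedule = []
    for k in range(n_segments):
        t_mid = (k + 0.5) * dt
        # Target voltage at the slice midpoint, normalized so that
        # the battery voltage is 1.0.
        level = math.sin(2 * math.pi * freq_hz * t_mid)
        schedule.append((abs(level), +1 if level >= 0 else -1))
    return schedule

for duty, polarity in duty_cycles(n_segments=8):
    print(f"duty={duty:.2f}, polarity={polarity:+d}")
```

The more segments you slice the cycle into, the more closely the averaged
output tracks the target sine wave.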
15 | 16 | Assuming a constant load, the work performed over each time slice then 17 | varies in proportion to the voltage at the midpoint of the time slice. 18 | The average voltage delivered is correct over each time slice, even 19 | though the instantaneous voltage is never really correct at all. 20 | 21 | I bet you could further smooth things out with a capacitor... 22 | 23 | They also show how you can build a more sophisticated version that 24 | converts from DC to three-phase. That's overkill for my needs right now. 25 | 26 | Source: https://www.youtube.com/watch?v=iIqhAX0I7lI 27 | -------------------------------------------------------------------------------- /randos/quantum-tex/04-epr-pair-production.tex: -------------------------------------------------------------------------------- 1 | \section{EPR Pair Production} 2 | 3 | \begin{example} 4 | To produce an entangled EPR pair, start with $\braket{00}$. Then apply 5 | $\mtxH$ to the first qubit to get a superposition: 6 | 7 | \begin{nedqn} 8 | \parens{\sqtot} \parens{\braket{0} + \braket{1}} \otimes \braket{0} 9 | \end{nedqn} 10 | 11 | \noindent 12 | Next, do a CNOT operation from the first qubit to the second qubit. 13 | This is the part that entangles them. You get: 14 | 15 | \begin{nedqn} 16 | \parens{\sqtot} \parens{\braket{00} + \braket{11}} 17 | \end{nedqn} 18 | 19 | \noindent 20 | Done! 21 | \end{example} 22 | 23 | -------------------------------------------------------------------------------- /randos/quantum-tex/05-no-cloning.tex: -------------------------------------------------------------------------------- 1 | \section{No Cloning} 2 | 3 | \begin{remark} 4 | Basically, there is no unitary transformation which maps 5 | $\braket{\psi} \otimes \braket{0} \mapsto \braket{\psi} \otimes 6 | \braket{\psi}$. Which means you can't clone a quantum state (unless 7 | you already know what the state is). 8 | 9 | This means that quantum state is, in a sense, truly ``private.'' If we 10 | could clone $\braket{\psi}$, we could clone it a lot, and keep trying 11 | to measure in the computational basis. That would tell us the squares 12 | of the probability amplitudes. 13 | 14 | We could then keep doing that in the Hadamard basis, to gain the phase 15 | information. If we kept at it, we could figure out what $\psi$ is 16 | through statistical testing. That's what no cloning says we 17 | \emph{can't} do. We can't clone, so we can't run these tests. 18 | 19 | Aaronson suggests that quantum state is private in a similar sense to 20 | the ``bias'' on a coin you get to flip exactly once. 21 | \end{remark} 22 | 23 | \begin{remark} 24 | What is the proof of the no cloning theorem? It is that an operation 25 | which clones would destroy the information in the second bit, which 26 | means the operation cannot be unitary. Since our model restricts us to 27 | unitary operations, it forbids cloning.
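A concrete way to see the problem (my addition, not Aaronson's
phrasing): cloning is not even \emph{linear}. A linear map that copies
both basis states must, by linearity, send $\sqtot \parens{\braket{0} +
\braket{1}} \otimes \braket{0}$ to $\sqtot \parens{\braket{00} +
\braket{11}}$. But genuinely cloning $\braket{\psi} = \sqtot
\parens{\braket{0} + \braket{1}}$ would have to produce $\braket{\psi}
\otimes \braket{\psi} = \frac{1}{2} \parens{\braket{00} + \braket{01} +
\braket{10} + \braket{11}}$. These differ, so no linear map (unitary or
otherwise) can clone arbitrary states.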
28 | \end{remark} 29 | -------------------------------------------------------------------------------- /randos/quantum-tex/clean-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import clean 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./quantum.tex" 11 | ]] 12 | 13 | for p in SOURCE_PATHS: 14 | clean(p) 15 | -------------------------------------------------------------------------------- /randos/quantum-tex/make-latex: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import sys 4 | sys.path.append("/Users/ruggeri/repos/ruggeri/tech-notes/lib") 5 | 6 | from make_latex import run 7 | from pathlib import Path 8 | 9 | SOURCE_PATHS = [Path(f) for f in [ 10 | "./quantum.tex" 11 | ]] 12 | 13 | run(SOURCE_PATHS) 14 | -------------------------------------------------------------------------------- /randos/quantum-tex/quantum.tex: -------------------------------------------------------------------------------- 1 | \documentclass[11pt, oneside]{amsart} 2 | 3 | \usepackage{geometry} 4 | \geometry{letterpaper} 5 | 6 | \usepackage{ned-common} 7 | \usepackage{ned-abstract-algebra} 8 | \usepackage{ned-linear-algebra} 9 | 10 | \newcommand{\braket}[1]{|#1\rangle} 11 | \newcommand{\sqtot}{\frac{\sqrt{2}}{2}} 12 | 13 | \begin{document} 14 | 15 | \title{Quantum Computing} 16 | \maketitle 17 | 18 | \input{01-basics} 19 | \input{02-superposition-entanglement} 20 | \input{03-multi-qubit-gates} 21 | \input{04-epr-pair-production} 22 | \input{05-no-cloning} 23 | \input{06-hadamard-in-depth} 24 | \input{07-quantum-teleportation} 25 | 26 | \end{document} 27 | -------------------------------------------------------------------------------- /randos/quantum-tex/xx-sources.md: -------------------------------------------------------------------------------- 1 | * Scott Aaronson 2 | * Now I get it! 3 | * Quantum Computing Without the Physics (Italian TJ Watson guy). 4 | * Super helpful! 5 | * https://quantumcomputing.stackexchange.com/questions/4430/quantum-teleportation-second-classical-bit-for-removing-entanglement 6 | * This helped me understand *a lot*. 7 | * The geometry section on Wikipedia for Grover's algo was actually quite 8 | useful! 9 | -------------------------------------------------------------------------------- /randos/quantum/03-no-cloning.md: -------------------------------------------------------------------------------- 1 | ## No Cloning 2 | 3 | Basically, there is no unitary transformation which maps `|psi> 4 | \otimes |0>` to `|psi> \otimes |psi>`. Which means you can't clone 5 | a quantum state (unless you already know what the state is). 6 | 7 | This means that quantum state is, in a sense, truly "private." If we 8 | could clone psi, we could clone it a lot, and keep trying to measure 9 | in the computational basis. That would tell us the squares of the 10 | probability amplitudes. 11 | 12 | We could then keep doing that in the Hadamard basis, to gain more 13 | information. If we keep at it, we can figure out what psi is through 14 | statistical testing. 15 | 16 | That's what no cloning says we *can't* do. We can't clone, so we can't 17 | run these tests. 18 | 19 | Aaronson suggests that this is like trying to figure out the fairness 20 | of a coin based on one coin flip.
21 | -------------------------------------------------------------------------------- /randos/quantum/04-epr-pair-production.md: -------------------------------------------------------------------------------- 1 | To produce an entangled EPR pair, start with `|00>`. Then apply `H` to 2 | the first qubit to get a superposition: 3 | 4 | (1/\sqrt{2}) (|0> + |1>) \otimes |0> 5 | 6 | Next, do a controlled-NOT operation from the first qubit to the second. 7 | This is the part that entangles them. You get: 8 | 9 | (1/\sqrt{2}) (|00> + |11>) 10 | 11 | Done! 12 | -------------------------------------------------------------------------------- /randos/quantum/xx-sources.md: -------------------------------------------------------------------------------- 1 | * Scott Aaronson 2 | * Now I get it! 3 | * Quantum Computing Without the Physics (Italian TJ Watson guy). 4 | * Super helpful! 5 | * https://quantumcomputing.stackexchange.com/questions/4430/quantum-teleportation-second-classical-bit-for-removing-entanglement 6 | * This helped me understand *a lot*. 7 | * The geometry section on Wikipedia for Grover's algo was actually quite 8 | useful! 9 | -------------------------------------------------------------------------------- /randos/rooting.txt: -------------------------------------------------------------------------------- 1 | Phone operating systems don't typically give you unrestricted access 2 | to do whatever you want. 3 | 4 | In the Android world, which runs a *nix-like system, you can *root* 5 | the phone, which means you gain root user privileges. I did this on my 6 | Pixel 2 by unlocking the bootloader and loading TWRP. 7 | 8 | *Unlocking* the bootloader allows you to boot operating systems not 9 | signed by the manufacturer. Most Android phones don't let you do this; 10 | Pixel 2 makes it very easy. 11 | 12 | After booting into TWRP, you can flash the recovery image with a 13 | patched version of the OS. You then recover the patched version. The 14 | patched OS is specific to the phone. 15 | 16 | Note: you don't need to root to install apps distributed outside the 17 | Play Store. This can always be done by *sideloading*. 18 | 19 | However, you do need root to modify the system itself. 20 | 21 | I'm not sure how Magisk fits into this. I think it may be an app that 22 | manages enabling root. I think the TWRP images maybe have something to 23 | do with "magiskboot". 24 | 25 | LineageOS is the community continuation of the CyanogenMod 26 | project. They didn't have a build for Pixel 2 when I got mine. It is 27 | like a distro of Linux. 28 | 29 | **I have probably messed up many important details** 30 | -------------------------------------------------------------------------------- /randos/ssd.md: -------------------------------------------------------------------------------- 1 | On an SSD, a page needs to be reset before you can write it. So if you 2 | want to update a page, you need to read it, and write it to some blank 3 | page elsewhere, updating the logical mapping describing where that 4 | page lives physically. 5 | 6 | Of course, sometimes you have to erase pages. The problem is that 7 | pages are erased in bigger chunks than they are written. So you have 8 | to read a bunch more memory, clear it, then rewrite it back with the 9 | one change. Note that in fact you must *move* the big erased block, 10 | because you need to ensure that you can recover from a failure.
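A toy back-of-the-envelope in Python, to make the worst case concrete.
The sizes here are hypothetical round numbers, not from any particular
drive's spec sheet:

```
# Worst case: a small logical write forces a read-modify-write (and
# relocation) of an entire erase block.
ERASE_BLOCK_KIB = 256   # hypothetical erase-block size
LOGICAL_WRITE_KIB = 4   # one small page update

# Physical bytes written per logical byte written: the drive's
# "write amplification" for this access pattern.
amplification = ERASE_BLOCK_KIB / LOGICAL_WRITE_KIB
print(amplification)  # 64.0
```

Real drives batch and remap writes to keep amplification well below this
worst case; see the Wikipedia link below.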
11 | 12 | One annoying bit: if you are constantly fsyncing small changes to a 13 | log, you're going to keep moving a bunch of data around the 14 | drive. This will also wear it down substantially, which will reduce 15 | lifespan, since each cell can only be written so many times. 16 | 17 | But that sucks, because you may need to fsync for transactionality 18 | purposes in your application. 19 | 20 | To give an idea of IOPS, 15k RPM drives give around 200 random IOPS. A 21 | good SSD nowadays gives more like 100k IOPS. It looks like for 22 | sequential I/O on a rotational drive, you might get ~15k IOPS. 23 | 24 | https://en.wikipedia.org/wiki/Write_amplification 25 | -------------------------------------------------------------------------------- /randos/theorem-proving-books.md: -------------------------------------------------------------------------------- 1 | * https://cel.archives-ouvertes.fr/file/index/docid/475807/filename/coq-hurry.pdf 2 | * http://isabelle.in.tum.de/doc/tutorial.pdf 3 | * https://softwarefoundations.cis.upenn.edu/ 4 | * https://www.amazon.com/Logic-Computer-Science-Foundations-Automatic/dp/0486780821 5 | -------------------------------------------------------------------------------- /web/production-rails.md: -------------------------------------------------------------------------------- 1 | 0. You'll have to create the prod database and migrate it. 2 | 1. You have to `rails server -e production -p 8080` 3 | 2. You have to set up nginx to proxy. 4 | * `/etc/nginx/sites-enabled/default` 5 | * https://www.nginx.com/resources/wiki/start/topics/examples/loadbalanceexample/ 6 | * `service nginx reload` 7 | * BTW: HAProxy is a lot like Nginx. One difference is that Nginx is 8 | also a full web server, while HAProxy is purely a proxy/load balancer. 9 | 3. Have to compile assets for production. 10 | * Also need to install `rails_serve_static_assets`, unless you want 11 | to configure nginx for this... 12 | 4. May have to use `crontab -e` to set up recurring Rake tasks. 13 | -------------------------------------------------------------------------------- /web/redcarpet_rouge.md: -------------------------------------------------------------------------------- 1 | So to tell Slim to render fenced code blocks, you need to configure 2 | Redcarpet (the markdown renderer): 3 | 4 | ``` 5 | Slim::Engine.set_options( 6 | :markdown => { 7 | fenced_code_blocks: true, 8 | } 9 | ) 10 | ``` 11 | 12 | Then, further, if you want to use Rouge to do syntax highlighting, you 13 | need to extend the HTML rendering engine: 14 | 15 | ``` 16 | require 'redcarpet' 17 | require 'rouge' 18 | require 'rouge/plugins/redcarpet' 19 | require 'slim' 20 | 21 | # Not sure why I can't just extend Redcarpet::Render::HTML 22 | class MyHTML < Redcarpet::Render::HTML 23 | # This is a plugin to swap out the code_block rendering code 24 | # and wrap it with Rouge stuff. 25 | include Rouge::Plugins::Redcarpet 26 | end 27 | 28 | # This is how we configure Slim's use of redcarpet. 29 | Slim::Engine.set_options( 30 | :markdown => { 31 | fenced_code_blocks: true, 32 | renderer: MyHTML.new(render_options = {}) 33 | } 34 | ) 35 | ``` 36 | 37 | See how we've told Slim to use our extended engine? 38 | 39 | Now download a stylesheet from pygments-css and add it to your vendor 40 | stylesheets.
Weird, you have to add this to your MyHTML: 41 | 42 | ``` 43 | def rouge_formatter(lexer) 44 | Rouge::Formatters::HTMLLegacy.new(:css_class => "codehilite #{lexer.tag}") 45 | end 46 | ``` 47 | 48 | The change is that Rouge uses `highlight` but pygments-css needs 49 | `codehilite`. Otherwise this method is just a copy. 50 | 51 | The background color may not render properly, so you might have to add 52 | that. Odd. 53 | 54 | I liked github.css pretty well; it looks nice on a mostly white 55 | background site. 56 | -------------------------------------------------------------------------------- /web/resque.md: -------------------------------------------------------------------------------- 1 | * To set up resque, you need to install redis. 2 | * You write a job in `app/jobs`. That's simple. 3 | * You configure rails to use resque. Simple. 4 | * You need to run a resque worker to do the job. You can do this via: 5 | 6 | QUEUE=reminder_* rake environment resque:work 7 | 8 | * (reminder is the name of my app). 9 | * But to run it as a service, that involves systemd. So I wrote 10 | `/etc/systemd/system/reminder_resque.service`: 11 | 12 | ``` 13 | [Unit] 14 | Description=Resque worker for reminder 15 | 16 | [Service] 17 | ExecStart=/bin/bash -c 'cd /home/ubuntu/reminder \ 18 | && RAILS_ENV=production QUEUE=reminder_production_* \ 19 | /home/ubuntu/.rbenv/versions/2.3.3/bin/bundle exec \ 20 | /home/ubuntu/.rbenv/versions/2.3.3/bin/rake environment resque:work' 21 | ExecStop=/bin/bash -c 'kill $MAINPID && wait $MAINPID' 22 | ``` 23 | 24 | * This is pretty weaksauce probably, but it does what I need. 25 | * You can start it with `sudo systemctl start reminder_resque.service`. 26 | * To be able to schedule jobs for a future time (instead of just queuing 27 | them to be done ASAP), you need `resque-scheduler`. 28 | * You'll have to make a service for this, too. 29 | * You need to add `require 'resque/scheduler/tasks'` to your Rakefile. 30 | * To run the scheduler, you want: `bundle exec rake environment resque:scheduler` 31 | --------------------------------------------------------------------------------