The only catch is that I’m doing this for fun. I can’t take every project, so I might start with the ones I think are most interesting. If I can’t get to the project, maybe someone else could. The first thing I probably need to do is set up a website of wishes and wish-granters, but meanwhile, you can just reply to this post. Teachers, would you be interested in something like this? What kinds of educational projects can you envision? Interested bystanders, would you volunteer your skills?
Here’s a brief list of my skills.
Need the perfect diagram and can’t find it? I’ll make it better.
Need a small interactive website, simulation, or program?
Need to analyze that concept inventory or unit test? Scrub the student information (replaced with teacher-chosen student IDs), and I can find the story that your data whispers.
I have other skills too, probably. Just ask. I would love to hear from you.
Homepage: http://kcksumner.schoolloop.com/
The dry statistics are here: http://online.ksde.org/rcard/building.aspx?org_no=D0500&bldg_no=8322 (or at least they are supposed to be when the server is not down)
More information is here: http://kckps.org/index.php/high-school-report-cards/450-sumner-academy
We are consistently ranked as the number one school in Kansas by Newsweek, Washington Post, and U.S. News & World Report, but these metrics are nothing next to the chance to work with other teachers curious about science and committed to their craft, teaching students who merit the best teaching.
Opportunities for high-level curriculum development abound. New teachers will most likely teach some combination of Science 8, Physics 9 (only a general KS science certification needed), and Biology (currently 10th grade). See below. Chemistry teachers are also encouraged to apply, to teach Science 8, Physics 9, and (in a later year) Chemistry 10. During our PCB transition, we will need people who know chemistry. Not all courses have traditionally used Modeling, but two of our teachers have been trained in Modeling Physics and use it, and the others are receptive to Modeling.
Please first email a resume and cover letter to our principal, “Mr. Jonathan Richard” <Jonathan.Richard@kckps.org> . You are welcome to refer to this posting by “Brian Vancil”. Then, apply for a job listing (not all of the three jobs have yet been posted) in the Kansas City, Kansas Public Schools database at https://kckps.tedk12.com/hire/index.aspx . For instance, https://kckps.tedk12.com/hire/ViewJob.aspx?JobID=3482 . You have to click on the posting to see the “Anticipated Location(s): Sumner” field. Here are direct links to the job postings:
We’d love to work with you!
This is my attempt to arrange the ideas.
Here are the progressions found in Appendix E of the standards. I do digress into talk of matter and substance when it supports later understanding of atoms. I’ve expanded these to list ideas explicitly and separately.
Let me know if you think I’ve forgotten anything here!
The AAAS has a great website under the auspices of Project 2061 that lists ideas and misconceptions related to Atoms, Molecules, and States of Matter.
Arons identifies four lines of evidence necessary to build an early quantum model of the atom:
TODO: I’d like to work on a Learning Landscape, Knowledge Packet, or Learning Progression synthesizing these sources, but that will have to be added later.
Perhaps the biggest controversy is disagreement over whether these models need to be taught pseudo-historically. (And that sequence already leaves out all the really bad models.) However, the terrible picture that society has adopted as the meme for the atom (see below) affects student perceptions of the atom.
It would be nicer if students came into classrooms with the following conception of an atom.
Physics teachers tend to like the Bohr model in that it can quickly (although magically) explain the Rydberg formula. However, there are many reasons to dislike the Bohr model.
TODO: What classroom experiments or simulations could help students to progress in their knowledge of atoms?
In introducing the concept of units to my introductory classes, I tell them that a quantity has a unit when we have an operational definition for assigning a number to it and that the number that results depends on the choice of an arbitrary standard. Since the standard is arbitrary, we may only equate quantities (or add them) if they change in the same way when we change our standard. Otherwise, a numerical equality that we obtain might be true for one choice of a standard but not for another. An equation that has physical validity ought to retain its correctness independent of our arbitrary choices.
Many physicists will recognize in the above the “scent of Einstein” – the idea that “a difference that makes no difference should make no difference.” Einstein, Poincaré, and others around the turn of the last century introduced the idea of analyzing how physical measurements change when you change your perspective on them – rotate your coordinates (in the case of identifying vectors and tensors), hop onto a uniformly moving frame (in the case of special relativity), or make a general non-linear coordinate transformation (in the case of general relativity).
Of course, the transformations that we use with units do not depend on spatial coordinates, but considering the symmetry transformation explicitly punched me in the gut because I realized that we don’t teach this explicitly to students in science classes. Where I work, students trying to convert units typically try one of two strategies. The first (used with metric conversions) is to count how many factors of 10 there are between the units (like km and cm) and then move the decimal point that many places. Even when students remember this procedure, they as often as not move the decimal point in the wrong direction. The second is to use the unit-factor method, wherein they often try “fun” factors.
At least for proportional units—let’s leave affinely related units for another time—we can regard a quantity as a pair $(x, u)$, where $x$ lives in some mathematical space, and $u$ is our unit, a physical “arbitrary standard”, as Redish calls it, which lives in some torsor for the multiplicative real numbers $\mathbb{R}_{>0}$. On this space, we can define an equivalence relation for physical equivalence, $(x, u) \equiv (x/\lambda, \lambda u)$, where $\lambda$ is a scale-factor. In my mind this leads to the following exercise:
The width of a desktop is measured in units of cm and found to be 134.4 multiples of 1 cm, or in other words, 134.4 cm. If, instead, one had measured it in a different unit, how many multiples of the new unit would the length be? Try for as many different units as you can.
This gets at two ideas. Students don’t always think in terms of 3 cm being 3 multiples of a centimeter length. They also don’t often get to choose what measurements they use, or the teacher has them convert in predictable ways. Here they can see that (134.4, cm)≡(134.4/10,cm*10)≡(134.4*10,cm/10). Would this help them build a chain of understanding of units? I realize that when I convert, I explicitly play the epistemic game in my head, “m is a bigger unit than cm, so it takes less of that length”. Do all students have that understanding?
The other epistemic game I play in my head is to think, “134.4 cm is about 100 cm. I know 100 cm is the same as a meter, so 134.4 cm is a little bigger than a meter. It’s definitely less than 2 meters, since 2 meters is the same length as 200 cm. I’m pretty sure that 134.4 cm works out to 1.344 m.”
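To make the pair picture concrete, here is a minimal sketch in Python. The dictionary of unit sizes and the `convert` function are my own illustration for this post, not part of any standard units library:

```python
# A minimal sketch of the (number, unit) pair picture of quantities.
# Each unit is recorded by its size in meters (an arbitrary reference choice).
UNIT_IN_METERS = {"m": 1.0, "cm": 0.01, "mm": 0.001, "km": 1000.0}

def convert(value, from_unit, to_unit):
    """Rescale the number when the arbitrary standard (unit) changes.

    If the new unit is lambda times as big as the old one, the number
    shrinks to value / lambda, so (value, u) and (value/lambda, lambda*u)
    describe the same physical length.
    """
    lam = UNIT_IN_METERS[to_unit] / UNIT_IN_METERS[from_unit]
    return value / lam

width_cm = 134.4
print(convert(width_cm, "cm", "m"))   # a meter is a bigger unit, so fewer of them
print(convert(width_cm, "cm", "mm"))  # a millimeter is a smaller unit, so more of them
```

The comments encode exactly the epistemic game described above: bigger unit, smaller number, and vice versa.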
How do you teach students to understand units in elementary, middle, and high school? I’d love to know what conceptual tools or epistemic games you give students.
Since I’m licensed to teach both, I’ll try. My invented audience here is all math teachers, from pre-K through post-secondary. I’m only too happy to revise this list from your feedback or new evidence about how students learn math and science.
Edit on 2015-03-13: I’ve added below based on suggestions from @BlackPhysicists to include Inverse Function Theorem, Implicit Function Theorem, injections, “bifurcations and transitions to chaos”, and hysteresis.
Secondary mathematics builds mathematical models (essentially parent functions) through the course of high school education. Although some of these could go in the sections above, I wanted to collect them here.
Quantities in physics can be classified by how they behave under rotation. This is a richly beautiful subject called group representations. What do we want students to be able to do? We hope that they can reason using each of these types of quantities. Graphical representations of vectors have not been as useful to students as we would have hoped (Heckler and Scaife, 2015). Algebraic representations seem to be easier to grasp, but that doesn’t mean that we should ditch graphical representations. It does mean that we should choose algebraic and graphical representations carefully, so I hope in this section to begin a conversation about approaches to vector analysis.
cd /dev
sudo ln -s tty.Fluke2-05FB-Fluke2 tty.scribbler
You will need to replace “05FB” with whatever you see when you type ls -l /dev/tty.*
from Terminal.app. Then I could type in the StartCalico.app:

from firmwareupgrade import *
upgrade("fluke", port="/dev/tty.scribbler")
upgrade("scribbler")

Then, back in StartCalico.app, I could connect to the robot (using Myro):

from Myro import *
init("/dev/tty.scribbler")
Now, according to Matt Greenwolfe, the next step is to write low-level movement routines that use the wheel encoders.
The idea for the poster comes from Frank Noschese, but blame me for the implementation. As a poster it’s not great yet. The example is just there with no prompts for how to use it. A graph would be helpful. These I’ll add later if the basic idea is sound. It could also do with a different example.
I’ll post images of the various incarnations below.
Update 2013-06-16: Based on feedback from Josh Gates, I’ve changed the last conceptual tool.
This year, when how I graded standards started morphing away from what they actually said, I realized that it was time to pick new standards. As I started to imagine how I would change them, I realized that I have no systematic way to do so. In teaching the subject of kinematics (in particular, uniform acceleration, i.e. CAPM), solving a problem involves motion concepts, multiple representations of motion (words, tables, Motion Maps, position-versus-timer-reading graphs, velocity-versus-timer-reading graphs, acceleration-versus-timer-reading graphs, equations, and even some weird ones like position-versus-velocity graphs, etc.), translations between representations, and features of representations that correspond to motion concepts. I can think of at least four broad ways to do this:
The formulas in parentheses give a rough estimate of the number of possible standards for N motion concepts and n different representations. However, these are not the only considerations. I have more-or-less made the decision that the level of difficulty should increase by adding standards, not by increasing the burden of proof or morphing existing standards, but this year I occasionally drifted from letting a mainly qualitative understanding suffice to requiring a quantitative understanding at the end of a unit. I’d much rather have this built into the standards, which should be clear about whether they require a qualitative or a quantitative understanding. Which types of standards should bifurcate into qualitative and quantitative flavors? Which overall method of choosing standards is most conducive to student morale and success? Is there any research about which kind of SBG standards are most conducive to learning?
So far I’ve found some prior work on SBG and PER/multiple representations:
It looks like this area is a bit weakly developed, and it’s definitely not very researched.
The basic motion data:
The derived data are:
Here’s my short summary of the basic features of constantly accelerated motion.
Taking data at uniform time intervals:
I’ll try to point out common misconceptions as we go, but some relevant from the get-go are:
In crafting standards, I’ll try to limit them to between 5 and 10 standards total per unit, and one of those will be something like:
OK, I think I lied. I tried to make it about the concepts and relationships among the concepts, but I did it using representations. Without some sort of representation, it’s way too abstract to judge the student work. Do we just want them to write on a test, “The area under a velocity graph gives displacement”, or do we want them to use this idea to do something?
Pros: This is closer to the spirit of what I really want them to know. It’s not about learning the representations. In this method, those are just tools to communicate and refine one’s ideas while allowing one to make calculations. This method more directly targets misconceptions.
Cons: Grading means that the teacher’s eyes must jump around the page looking at all the representations. What if four are right but one is wrong? Mark it a “not yet”? What if I miss this tiny detail on another student’s paper? I grit my teeth and go back through all the grading to make sure I was consistent. Some of the standards are a little vague. The difficulty of the standards changes depending on the type of data students are provided. These are the kind of standards that might make one’s head ache after grading 100 quizzes.
Interpretation of a representation is hard to assess without correct explanation or another representation. Therefore, I focus only on creating each representation. I also eschew the good grammar of “constantly accelerated motion” for “constant acceleration motion”. The former seems to me to connote non-zero acceleration (and I have ELL learners), but maybe that’s just me.
Here I still have the problem of requiring these representations to be qualitative or quantitative. I want students to be able to do both, but on assessments, I typically want quantitative accuracy.
Pros: This is easy to grade by correctness.
Cons: It does penalize students when they mess up one representation, for they are likely to mess up the others. They would have to reassess to correct the problem, but this is probably OK.
words, Motion Map, s-t graph, v-t graph, a-t graph, tables, and equations
I left out translations between tables and equations (and everything else) because we don’t often use them. That still leaves 5 representations and 20 possible translations, way too many! I kept only the ones I expect them to do. Since it gets tedious to read a bunch of the same language, I represented it as a digraph:
The evidence is the faithfulness of the translation.
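The 20-translation count is just the number of ordered pairs of distinct representations. A quick check in Python (the five names below are placeholders standing in for whichever representations survive the cut):

```python
from itertools import permutations

# Five placeholder representations (stand-ins for the ones kept in the digraph).
representations = ["words", "Motion Map", "position graph",
                   "velocity graph", "acceleration graph"]

# Each ordered pair (source, target) is a candidate "translate A into B" standard.
translations = list(permutations(representations, 2))
print(len(translations))  # n*(n-1) = 5*4 = 20
```

Restricting to the digraph's arrows is then just filtering this list, which makes vivid how quickly "assess every translation" explodes.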
Pros: It’s very easy to grade this way for simple tasks. You gave students a description of motion and asked them to make a position-vs-timer-reading graph? Dead easy.
Cons: It’s harder to grade for more complex tasks. If you gave students a description of motion and asked them to make a variety of graphs and a motion map, can you tell the order they used? Did they go from a motion map to a position-vs-timer-reading graph or vice versa? Can you tell? This is enough to dissuade me from using this method. Also, propagation of error: if students mess up one translation but correctly translate that wrong representation, is it OK? Or, do you want them to recognize when what they’re doing is not CV or CA motion?
Some of the features in the digraph below are from a previous blog post on analyzing velocity-vs-timer-reading graphs. Each of the features may be specified as occurring given some other kinematic event, like position when the object has reached a given speed, for instance. Again I have had to restrict myself to what I will assess. I based the arrows between objects on my current practice. I expect students to be able to do more, but I only included the most cognitively demanding tasks for assessment.
Note that if you were expecting, as I was, the concepts to match up to the features, you were sorely disappointed.
Pros: These are easy to grade when students show work near the representation or by annotating it, which I show my students how to do. It’s also very clear for students what they should be able to do.
Cons: If students don’t organize work well, it can be as hard as the previous method when students translate between representations in an unexpected manner. It also prescribes the method that the student must use. For instance, why do they need to find displacement from a velocity graph directly when they have already used that skill implicitly in creating a position graph? (Answer: For those students, I simply suggest being more explicit in annotating their velocity graphs.)
In trying to get better at teaching graphical representations of uniformly accelerated motion, I tried something similar to Kelly O’Shea’s paradigm laboratory for the Constant Acceleration Particle Model (CAPM). Being not so brilliant at teaching myself, I think I injected a little too much of myself into it, but it was clear from a “pre-test” (review of Constant Velocity Particle Model graphs going into this unit) that students were confusing velocity and position and didn’t remember much about how to analyze velocity-vs-timer-reading graphs, so we needed to do some review. Part of this is the insanely long time between CVPM and CAPM, since we went ETM⟶CVPM⟶MTM⟶BFPM⟶CAPM⟶UBFPM. If I do this again, I will need to include more model-based reasoning problems that incorporate CVPM throughout the previous units. Thus, I found myself trying to come up with a list we could make together to summarize our large number of different scenarios in the CAPM paradigm (class) lab. For instance, in one of my periods, we had (among other things):
I had the students annotate each segment of each graph with how the speed was changing (“v↑” or “speeding up”, “v↓” or “slowing down”), which direction it was going (“↑r” or “up the ramp”, “↓r” or “down the ramp”), and what was going on (“” for nothing, “pushed”, “stopped”, etc.). We made important notes besides the graphs that students drew in their lab notebooks, such as “☆The mass doesn’t seem to affect the graph very much.” Then we tried to summarize what we could tell with a table like:
Feature of the graph | ⟷ | Feature of the motion |
---|---|---|
point on the graph given by a pair (t, v) | ⟷ | a data point, i.e. snapshot, of an object moving with a velocity v at a timer reading t |
horizontal position of a point on the graph | ⟷ | timer reading |
vertical position of a point on the graph | ⟷ | velocity |
vertical distance of a point from the timer reading (t) axis (the v = 0 line) | ⟷ | how fast it is moving (its speed) |
position of a point above(+)/on(0)/below(-) the timer reading (t) axis (the v = 0 line) | ⟷ | which direction it is going |
steepness of the graph’s slope in the neighborhood of a point | ⟷ | how fast the velocity changes |
sign of the graph’s slope in the neighborhood of a point | ⟷ | which direction the velocity is changing (somewhat artificial) |
the graph in the neighborhood of a point moves [away from (+), parallel to (0), or toward (-)] the timer reading (t) axis (the v = 0 line) | ⟷ | speeding up (+), maintaining a constant velocity (0), or slowing down (-) |
point on the timer reading (t) axis (the v = 0 line) where the graph crosses the axis from negative to positive or vice versa | ⟷ | changing direction |
? | ⟷ | ? |
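The speeding-up/slowing-down row is the one students find least obvious; the sign test it encodes can be written out explicitly. This helper function is my own illustration, not something from the lab:

```python
def describe_speed_change(v, slope):
    """Classify motion from one point on a velocity-vs-timer-reading graph.

    v     : velocity at the point (vertical position on the graph)
    slope : slope of the graph there (the acceleration)

    The graph moves away from the v = 0 line when v and slope have the
    same sign (speeding up), toward it when they have opposite signs
    (slowing down), and parallel to it when the slope is zero.
    """
    if slope == 0:
        return "maintaining a constant velocity"
    if v == 0:
        return "momentarily at rest"
    if v * slope > 0:
        return "speeding up"
    return "slowing down"

# Negative velocity with negative slope: moving away from v = 0, so speeding up.
print(describe_speed_change(v=-3.0, slope=-1.0))
```

The point of writing it this way is that “speeding up” is a relationship between two signs, not a property of the slope alone, which is exactly the distinction students miss.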
Some classes were able to come up with their own entries. In others, I had to debase myself by suggesting them. In one class, it worked rather well to express my frustration that no one was saying anything, put a student in charge, and tell them that I would be silent while they figured it out. They put a lot that was wrong on the board, but they were jumping back to correct things they realized were wrong when trying to identify how to tell some of the other kinematic features; then the bell rang! I tried a similar approach in another class but didn’t give them enough awkward silence before going into silent mode myself. That class didn’t bother checking whether their hypothesized connections actually worked, and they weren’t given enough time to find out. I jumped into it with a few minutes to go and proceeded to ask them questions to test their statements, destroying all of them. I felt like rain on their parade. I had a hard time even convincing them that their statements were wrong because they could not tell me, given two points on the graph, which was moving faster. I intend to ask more questions like this next year during CVPM. I wanted to cry for them, and I was angry at all (including partly myself) who had failed to teach them how to read a graph. Every year we say, “These are smart kids. They should be doing better on the science part of the ACT.” Now we know why. (Thus, I added the first three lines of the table above.)
I know this can’t be the best way to teach this. Engagement was low, and the whole paradigm lab had a demonstrative feel.
Who says we can’t have fun in science? Feel this hot ruler! It’s ’cause of molecules, baby!
a student, on the chance discovery that bending plastic disordered the molecules, transferring her chemical energy to kinetic energy to thermal energy
This post is a quick writeup on interactions that I made this weekend for students. Since modern physics so often loses to more concrete (and older) material when teaching physics, I wanted to give my students a flavor of the kind of particle physics done in the last 200 years. I thought that I’d let them read this and decide which interactions we should take into account when drawing system schema and force diagrams. I’m not completely satisfied with my treatment of the normal force and the Pauli Exclusion Principle within the article. My language is also a little overblown, but believe it or not, I did tone it down and cut out adjective chunks like “low-energy effective”, replacing them with “everyday” when describing interactions. Anyway, PDF and XeLaTeX source are below.
While I’m at it, I should also describe the way I teach system schemata. Because I taught energy first, I didn’t want to distract with forces. Also, interactions are not exactly forces; forces describe the effect of an interaction on a particular object. Hence, I felt content to label interactions with simple codes for the type of interaction. Last year I used $F$ with a superscript for the type of interaction (with the “dealer→feeler” as a subscript). I like it for uniformity and the fact that it emphasizes that each type of force is indeed a force, but unfortunately, few college-level textbooks do this. When the emphasis is solving problems, a more succinct notation wins, so I thought that this year I’d give students a choice of force symbols. I hope to learn more about how students approach physics with their choices and to notice which approach is more effective at making connections between interactions, interaction energies, and forces. Examples (In the “Force Symbol” column, the first symbol is my version from last year and the second is what I typically see in problem-solving-focused mechanics textbooks):
Interaction | Interaction symbol | Interaction energy symbol | Force symbol |
---|---|---|---|
gravitational | g | or | |
normal | n | N/A | or |
frictional | f | N/A | or |
tensional | t | N/A | or |
elastic | el | or ? | |
electric | e | or ? | |
magnetic | m | or ? | |
From where do our everyday interactions come? (interactions.pdf)
%!TEX TS-program = xelatex
%!TEX encoding = UTF-8 Unicode
\documentclass[12pt,twocolumn]{article}
\usepackage{geometry} % See geometry.pdf to learn the layout options. There are lots.
\geometry{letterpaper} % ... or a4paper or a5paper or ...
\geometry{twoside, inner=1.9cm, outer=1.2cm, top=1.2cm, bottom=1.2cm}
%\geometry{landscape} % Activate for rotated page geometry
%\usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{fontspec,xltxtra,xunicode}
\defaultfontfeatures{Mapping=tex-text}
\setromanfont[Mapping=tex-text]{Baskerville}
\setsansfont[Scale=MatchLowercase,Mapping=tex-text]{Gill Sans}
\setmonofont[Scale=MatchLowercase]{Andale Mono}
\title{From where do our everyday interactions come?}
\author{Brian Vancil}
%\date{} % Activate to display a given date or no date
\usepackage{tikz}
\usetikzlibrary{mindmap}
\definecolor{royalblue}{rgb}{0,0.13725490196078433,0.4}
\definecolor{royalblueweb}{rgb}{0.25490196078431371,0.41176470588235292,0.88235294117647056}
\definecolor{burntorange}{rgb}{0.8,0.3333333333333333,0}
\definecolor{silver}{rgb}{0.75294117647058822,0.75294117647058822,0.75294117647058822}
\begin{document}
\maketitle
\begin{figure}[!ht]
  \caption{Known and guessed-at interactions in nature}
  \label{intdiag}
  \centering
  \begin{tikzpicture}[concept color=silver,text=white,
      root concept/.append style={concept color=burntorange,text=white},
      level 1 concept/.append style={every child/.style={concept color=silver,text=black}, sibling angle=40,level distance=4.5cm},
      level 2 concept/.append style={every child/.style={concept color=royalblueweb,text=white}, sibling angle=40},
      level 3 concept/.append style={every child/.style={concept color=silver,text=black}, sibling angle=40},
      level 4 concept/.append style={every child/.style={concept color=royalblueweb,text=white}, sibling angle=90},
      level 5/.style={every child/.style={concept color=silver,text=black}, sibling angle=40},
      mindmap]
    \node [concept] {Theory of Everything?} [counterclockwise from=250]
      child {node[concept] {Grand Unified Theory?} [counterclockwise from=250]
        child {node[concept] {strong int\-eraction} [counterclockwise from=270]
          child {node[concept] {nuclear interaction}}
        }
        child {node[concept] {electroweak interaction} [counterclockwise from=252.5]
          child {node[concept] {weak interaction}}
          child {node [concept] {electro\-magnetic interaction} [counterclockwise from=225]
            child {node [concept] {electric interaction} [counterclockwise from=215]
              child {node [concept] {normal interaction}}
              child {node [concept] {tensional interaction}}
              child {node [concept] {elastic interaction}}
              child {node [concept] {frictional interaction}}
            }
            child {node [concept] {magnetic interaction}}
          }
        }
      }
      child {node[concept] {gravitational interaction}};
  \end{tikzpicture}
\end{figure}

We know of only a few fundamental interactions in nature, and one human project in physics has been to explain everything that we observe in terms of simpler interactions. If you look at Figure~\ref{intdiag}, as you move down the diagram, you will find interactions that describe more and more specific circumstances. Each is useful, but on their own they explain very little. In the history of science, humans have worked upward in Figure~\ref{intdiag}, taking scientific models that appeared to be different but were really different aspects of the same thing and unifying them into a single simpler theory that explained more. For instance, the electric interaction is responsible for the following everyday interactions:
\begin{description}
  \item[normal interaction] The repelling squishiness of matter as two objects push against each other is due to two sources: (1) electric interaction involving negatively charged electron clouds around positively charged nuclei and (2) the Pauli exclusion principle---not really an interaction!---between the electrons. Something as simple as sitting in a chair involves a normal interaction. So do air pressure and air resistance.
  \item[tensional interaction] The electric bonding of negatively charged electron clouds to positively charged nuclei creates an attractive intermolecular interaction so that when a substance is stretched by a bit, it tends to pull back together. Parts of a rope pull on other nearby parts of a rope through a tensional interaction.
  \item[elastic interaction] An extreme form of the tensional interaction, in which matter changes its shape by a lot. Springs and elastic are good examples.
  \item[frictional interaction] Friction is not completely understood, but it involves the grinding together and shearing of irregular surfaces along each other. Hydrocarbon molecules also play a role between the surfaces. Something as simple as walking across a floor requires friction.
  \item[electric interaction] Electrically charged objects attract or repel each other depending on whether their charges are opposite or the same, respectively. If you have ever experienced static electricity, you know this well.
\end{description}
As an example of how the human project of searching for simpler explanations progressed, the \textbf{magnetic interaction} is familiar from magnets, but it was discovered in the 1800s that electric and magnetic interactions are really both part of a single \textbf{electromagnetic interaction}, which is responsible for the static electric interactions already mentioned, electricity, magnetism, and even light (really the entire spectrum of electromagnetic radiation). In the second half of the 1900s, physicists learned to describe both the electromagnetic interaction and the \textbf{weak interaction} (responsible for many forms of radioactive decay) by a single theory of the \textbf{electroweak interaction}.

Attempts have been made to unify the \textbf{strong interaction} (responsible for both the nuclear interaction that holds protons and neutrons together in the nucleus of an atom and for the interaction that holds quarks together within protons and neutrons) with the electroweak interaction into a single interaction. These theories go by the name of Grand Unified Theories, but all of them predict types of matter that we haven't seen yet. Also in the 1900s, physicists worked to unify the gravitational interaction with the other types of interactions to create a so-called Theory of Everything. Most of those attempts failed, but we humans have learned a lot from the failures, and we are still at it. In addition to the human project of unifying interactions, there is also an opposite human project of using these interactions to describe more and more complex systems of particles, everything from neutron stars to superconductors to everyday materials. Will there ever be an end to the human drive to organize and explain the universe?
\end{document}
One has only to google “definition of temperature” to realize the problem: Once one skips the dictionary definitions about temperature being a scale of hotness (which, as I will argue, are better than what follows), one gets to definitions that say something about how temperature is a measure of the average translational kinetic energy of the atoms in a substance. (I won’t link to these because I don’t want to increase their Google PageRank.)
Compare to NGSS draft 2: DCI PS3.A: “Temperature is a measure of the average kinetic energy of particles of matter. The relationship between temperature and the total energy of a system depends on the types, states, and amounts of matter present.”
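One immediate problem with “average kinetic energy” definitions is that average kinetic energy depends on your reference frame while temperature does not. A minimal numerical sketch (the mass, velocities, and boost are made up for illustration):

```python
import random

random.seed(0)
m = 1.0  # particle mass (made-up units)

# Thermal velocities of a small 1-D "gas" at some fixed temperature.
v_thermal = [random.gauss(0.0, 1.0) for _ in range(10000)]

def mean_ke(velocities):
    """Average kinetic energy per particle."""
    return sum(0.5 * m * v * v for v in velocities) / len(velocities)

# Put the whole gas on a moving truck: every velocity gains the same boost.
v_boosted = [v + 50.0 for v in v_thermal]

print(mean_ke(v_thermal))  # modest thermal value
print(mean_ke(v_boosted))  # enormous, yet the gas is no hotter

# Subtracting the center-of-mass velocity recovers the thermal value.
v_com = sum(v_boosted) / len(v_boosted)
print(mean_ke([v - v_com for v in v_boosted]))
```

This is why more careful statements retreat to the center-of-mass frame, and the next examples show why even that retreat isn’t enough.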
It’s a nice picture. It’s just not hard to break. For instance:
Even if you try to correct for this naïvely by specifying that the kinetic energy be measured in the center of mass frame:
Or even worse:
HyperPhysics has the best bad definition of temperature. However, I wouldn’t go as far as they do and call it an “operational definition”. What is “kinetic energy associated with disordered…motion”? What counts as disordered motion? Is a sound wave (phonon) disordered? It’s what we might think of as a vibration (which many texts cite when they talk about temperature). How do we measure disorder for the purpose of calculating the “associated” kinetic energy? Why doesn’t rotational energy count? What do we mean by “measure” (another problem with calling it an operational definition)—even if I account for all the “disordered” kinetic energy, what value do I put down for temperature?
What’s the simplest system we could envision that breaks this definition? Note that to break the measure part, there must be a non-monotonic relationship between temperature and kinetic energy. Candidates include:
HyperPhysics also gets credit for this good definition. However, the authors don’t give any examples where the behavior of temperature differs drastically from the simplistic idea.
The standard definition in thermodynamics is:
However, the standard definition of temperature in statistical mechanics is:
Does this always work? What does it mean? Without teaching entropy well, this might be hopeless. However, I recently read an interesting paper on a simple model to introduce the need for entropy as a thermodynamic variable:
I envision a tripartite system of teleological, conceptual, and operational definitions, where we scale up the complexity of the conceptual definitions (i.e. develop new models for temperature) as a student progresses through the system. This has to be explicit, or students won’t understand either the nature of science or why their old ideas are not quite right.
Even this isn’t obvious and needs some justification. If we have three systems in equilibrium, A, B, and C, why couldn’t A heat B, B heat C, and C heat A if they were brought into pairwise contact?
It does get the point across why we care about temperature, which helps to ground our other definitions and provide continuity in the notion of temperature.
This should be connected to the idea of heating and reference temperatures (from special systems with understood behavior). It can later include the idea of different scales. The idea of absolute zero should come from an extrapolation of gas law data from a student experiment.
Note that we’re pretty close to a bad definition but that it’s always augmented by our teleological and operational definitions. This is not worth getting to until students understand that matter is made of molecules. Students should see simulations of matter at different temperatures to get a feel for what we’re talking about. It’s simplistic and qualitative.
That is, roughly, it’s the ratio of added energy to the change in entropy that results. We’re moving past our bad definitions.
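In symbols (holding the volume and amount of matter fixed), the statistical-mechanics definition reads

```latex
\frac{1}{T} = \left( \frac{\partial S}{\partial E} \right)_{V,N},
\qquad\text{i.e.}\qquad
T \approx \frac{\Delta E}{\Delta S}
```

for a small amount of energy added by heating.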
Regular absolute temperature fails our teleological definition at negative absolute temperatures. This new definition gets the direction of thermal transfer right when two systems come into contact. See, for instance:
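The direction-of-transfer claim is just entropy maximization: if systems 1 and 2 exchange a small amount of energy $dE_1 = -dE_2$, then

```latex
dS_{\text{tot}} = \left( \frac{1}{T_1} - \frac{1}{T_2} \right) dE_1 \ge 0 ,
```

so energy flows into whichever system has the larger $1/T$, i.e., the larger entropy gain per unit energy. Ordering systems by $1/T$ instead of $T$ therefore works even at negative absolute temperatures.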
A 2 kg melon is balanced on your bald uncle’s head. His son, Throckmorton, shoots a 50 g arrow at it with a speed of 30 m/sec. The arrow passes through the melon and emerges with a speed of 18 m/sec. Find the speed of the melon as it flies off the man’s head.
became
There’s a 2-kg (read: Kay-Gee) melon on your bald uncle’s head.
Cousin Throckmorton’s crossbow looks to shoot him dead,
but his arrows are true and are 50 g (read: Gees)
and will pierce the right melon if his daddy don’t sneeze.
When he pulls the trigger, all the physics lovers scream,
and the arrow’s speed drops from 30 to 18
meters per second!
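Either version of the problem reduces to conservation of momentum: the momentum the arrow loses goes to the melon. A quick check in Python:

```python
# Momentum lost by the 50 g arrow as it passes through the 2 kg melon
m_arrow = 0.050        # kg
m_melon = 2.0          # kg
v_arrow_before = 30.0  # m/s
v_arrow_after = 18.0   # m/s

# Conservation of momentum:
# m_arrow * v_before = m_arrow * v_after + m_melon * v_melon
v_melon = m_arrow * (v_arrow_before - v_arrow_after) / m_melon  # 0.3 m/s
```

The melon flies off at 0.3 m/s.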
To calculate how far a marble will roll along the floor after being dropped down a tube leaves the comfortable world of school physics and enters the real world of rolling friction on cardboard and carpet. If the tube is close to vertical, the marble will be going quite fast as it exits the tube but won't go very far horizontally. If the tube is close to horizontal, the marble will be going quite slowly as it exits the tube. Our intuition tells us that neither will work as well as somewhere in the middle, so we can search for the angle that maximizes the horizontal distance of the marble along the floor. This is a case for which the rolling resistances in the tube and on the floor will partly “cancel”, and my daughter's play tests show that we can get pretty good results just by tracking the proportionality of the velocity to a function of the angle.
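Here's a toy model of the tube-and-marble problem, just to show the interior optimum. The rolling-resistance coefficients, tube length, and the assumption that only the horizontal velocity component survives the transition to the floor are all my own guesses, not fitted to anyone's play tests:

```python
import math

def floor_distance(theta, L=1.0, mu_tube=0.1, mu_floor=0.05, g=9.8):
    """Rolling distance along the floor for a tube at angle theta (radians).

    Toy model with made-up rolling-resistance coefficients.
    """
    # Speed-squared at the tube exit: gravity along the tube minus
    # rolling-resistance losses, both proportional to tube length L.
    v2_exit = 2 * g * L * (math.sin(theta) - mu_tube * math.cos(theta))
    if v2_exit <= 0:
        return 0.0  # the marble stalls in the tube
    # Assume only the horizontal velocity component survives the
    # transition to the floor; rolling resistance then eats the
    # kinetic energy: (1/2) v_h^2 = mu_floor * g * d
    v2_horizontal = v2_exit * math.cos(theta) ** 2
    return v2_horizontal / (2 * mu_floor * g)

# Scan whole-degree angles for the best horizontal distance
best_deg = max(range(1, 90), key=lambda d: floor_distance(math.radians(d)))
```

With these (invented) numbers the optimum lands near 38°, well away from both extremes, matching the intuition that neither a vertical nor a horizontal tube wins.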
What physics models do you have to use to solve this problem?
Before our winter break this year, we started building the Momentum Transfer Model, which, at its start, is really about conservation of momentum. It wasn't good timing, with so little time to work with. When students watched me collide Cart A (moving) with Cart B (at rest), which would then stick together with Velcro, they suggested various things I could try that might affect the final motion of the carts. They quickly settled on the masses of the two carts and the initial velocity of Cart A. To explore these factors as independent variables, I split students into groups: 1/3 were to investigate Cart A's mass, 1/3 were to investigate Cart B's mass, and 1/3 were to investigate Cart A's initial velocity. As variables go, changing the masses in such a way as to keep the total mass constant would have been cleaner, but this is hindsight. I wanted to give students a feel for how to gain this kind of hindsight.

The groups who experimented with Cart A's initial velocity had a singularly easy job. They only had to control the variables of the two carts' masses, and they just pushed Cart A with various speeds and used a motion detector to measure both initial and final velocities. Despite this, some groups didn't pay attention to whether their data made sense. Some groups had data that showed increasing initial velocities of Cart A but blips where the final velocity of the carts decreased before increasing again. Some groups did notice it, and they rechecked the results until they got something consistent.

The groups who experimented with Cart A's mass had the second-easiest assignment. Despite our shortage of track, they were given an extra piece of track to use as a ramp. Releasing the cart from the same location on the ramp gave their initial velocities an uncertainty of about 2 cm/s.
However, even though Cart A’s initial velocity was supposed to be a controlled variable, many students didn’t pay very close attention to it or even try to measure it. Despite this, their data was relatively good. The worst results came from the groups who experimented with Cart B’s mass. Since we lacked enough track to make ramps, we used the spring-loaded plunger on one of the carts to achieve a controlled initial velocity that was uncertain at about the 10% level (maybe 5 cm/s). However, it was easy to have a misfire that would result in a reduced initial velocity. Very few student groups noticed that Cart A’s initial velocity varied wildly in their data. The final tally was that in each of my classes only 1/3 to 1/2 of the data was usable. I would ordinarily send students back to the lab to gather better data. I still may do it, since understanding how to gather data is a more important skill than deriving conservation of momentum from experimental data. However, that’s not what I did.
When students gathered data, I had them fill out all six variables below, not just their independent and dependent variables. This confused them at first but encouraged them to record and/or measure each value, which did help them to assess the quality of their data. Unfortunately, when they went to graph it, they were confused about what mattered and what didn't. I probably should have just told them to graph something, and then we could have argued later about whether it was useful and why. Instead I just grumbled impatiently at students like they were asking me how to breathe. Not super helpful. Then, even less helpfully, I just told them what they should graph. Anyway, here's what they recorded, though I encouraged them to choose the order of the variables that made the most sense to them:
A’s mass [cart] | B’s mass [cart] | A’s initial velocity [cm/s] | A’s final velocity [cm/s] | B’s initial velocity [cm/s] | B’s final velocity [cm/s] |
---|---|---|---|---|---|
 | | | | (will be 0 since B’s initially at rest) | (will be the same as A’s final velocity) |
The first day that I realized the data was bad, we just looked at the limiting behavior that we would expect for really small and really large masses. We came up with analogies for each situation, and sketched the graphs. This effectively broke the linear models students fit to their data, but they stared blankly because they hadn’t really gotten that far with their terrible data anyway. Was this useful? I don’t know yet.
I made simulations…bunches of them:
# Modified version of https://github.com/gcschmit/vpython-physics/blob/master/inelastic%20collision/inelastic.py

### INITIALIZE VPYTHON
# -----------------------------------------------------------------------
from __future__ import division
from visual import *
from physutil import *
from visual.graph import *

### Initial conditions:
# -----------------------------------------------------------------------
mA = 1   # cart
mB = 1   # cart
vA = 24  # cm/s
vB = 0   # cm/s

class QuantityField:
    def __init__(self, pos=vector(0,0,0), variable='', delimeter=' = ', unit=''):
        try:
            self.variable = variable
            self.delimeter = delimeter
            self.unit = unit
            self.label = label(pos=pos, text=(variable+delimeter+'?'+unit), box=False)
        except TypeError as err:
            print('Wrong types passed to QuantityField constructor')
            print(err)
            raise err

    def update(self, q):
        try:
            self.label.text = (self.variable+self.delimeter+str(q)+self.unit)
        except TypeError as err:
            print('Wrong type passed to QuantityField update method')
            print(err)
            raise err

### SETUP ELEMENTS FOR GRAPHING, SIMULATION, VISUALIZATION, TIMING
# ------------------------------------------------------------------------
scene = display(title="Inelastic collision", width=600, height=400, background=color.black)
scene.autoscale = False
scene.range = 2.0

# Define scene objects (units are in meters)
objA = sphere(radius=0.1*pow(mA,1./3), color=color.green)
objB = sphere(radius=0.1*pow(mB,1./3), color=color.blue)

# Set up motion map for ball 1
motionMapA = MotionMap(objA,
                       10,  # expected end time in seconds
                       10,  # number of markers to draw
                       labelMarkerOrder=False,
                       markerColor=objA.color,
                       markerScale=0.5,
                       markerType="breadcrumbs")

# Set up motion map for ball 2
motionMapB = MotionMap(objB,
                       10,  # expected end time in seconds
                       10,  # number of markers to draw
                       labelMarkerOrder=False,
                       markerColor=objB.color,
                       markerScale=0.5,
                       markerType="breadcrumbs")

# Set timer in top right of screen
timerDisplay = PhysTimer(0, 1)  # timer position (units are in meters)

mADisplay = QuantityField(vector(-1, 0.8, 0), variable="A's mass", unit=' cart')
mADisplay.update(mA)
mBDisplay = QuantityField(vector(1, 0.8, 0), variable="B's mass", unit=' cart')
mBDisplay.update(mB)
ivADisplay = QuantityField(vector(-1, 0.6, 0), variable="A's init.vel.", unit=' cm/sec')
ivADisplay.update(round(float(vA),2))
ivBDisplay = QuantityField(vector(1, 0.6, 0), variable="B's init.vel.", unit=' cm/sec')
ivBDisplay.update(round(float(vB),2))
vADisplay = QuantityField(vector(-1, 0.4, 0), variable="A's velocity", unit=' cm/sec')
vBDisplay = QuantityField(vector(1, 0.4, 0), variable="B's velocity", unit=' cm/sec')

### SETUP PARAMETERS AND INITIAL CONDITIONS
# ----------------------------------------------------------------------------------------
# Define parameters
objA.m = mA                    # mass of ball in kg
objA.pos = vector(-1, 0, 0)    # initial position of the ball in (x, y, z) form, units are in meters
objA.v = vector(vA/100, 0, 0)  # set the velocity vector
objB.m = mB                    # mass of ball in kg
objB.pos = vector(1, 0, 0)     # initial position of the ball in (x, y, z) form, units are in meters
objB.v = vector(vB/100, 0, 0)  # set the velocity vector

# Define time parameters
t = 0      # starting time
dt = 0.01  # time step units are s

def v_from(obj):
    "helper routine to give cm/sec units from the object's velocity"
    return round(obj.v.mag*100, 2)

### CALCULATION LOOP; perform physics updates and drawing
# ------------------------------------------------------------------------------------
while mag(objA.pos) < 2 and mag(objB.pos) < 2:  # while the balls are within 2 meters of the origin
    # Required to make animation visible / refresh smoothly (keeps program from running faster
    # than 1000 frames/s)
    rate(1000)

    # Position update
    objA.pos = objA.pos + objA.v * dt
    objB.pos = objB.pos + objB.v * dt

    # check if the balls collided
    if mag(objA.pos - objB.pos) < ((objA.radius + objB.radius) / 2):
        # calculate the total momentum
        totalMomentum = (objA.m * objA.v) + (objB.m * objB.v)

        # calculate the velocity of the combined ball
        objA.v = objB.v = totalMomentum / (objA.m + objB.m)

    # Update motion map, timer
    motionMapA.update(t)#, objA.v)
    motionMapB.update(t)#, objB.v)
    timerDisplay.update(t)
    vADisplay.update(v_from(objA))
    vBDisplay.update(v_from(objB))

    # Time update
    t = t + dt

### OUTPUT
# --------------------------------------------------------------------------------------
# Print the final time and the ball's final position
#print(', '.join(map(str,["A's mass", "A's initial velocity", "B's mass", "B's initial velocity", "Final mass together", "Final velocity together"])))
#print(', '.join(map(str,[mA, vA, mB, vB, mA+mB, v_from(objA)])))
#print(','.join(map(str,["A's mass", "A's initial velocity", "A's final velocity", "B's mass", "B's initial velocity", "B's final velocity"])))
#print(','.join(map(str,[mA, vA, v_from(objA), mB, vB, v_from(objB)])))
print(','.join(map(str,["A's mass", "B's mass", "A's initial velocity", "A's final velocity", "B's initial velocity", "B's final velocity"])))
print(','.join(map(str,[mA, mB, vA, v_from(objA), vB, v_from(objB)])))
I also threw together a spreadsheet that would automatically calculate the final velocities given the two masses and Cart A’s initial velocity. Then, we played the Hypothesis Game to get students used to finding patterns. Some of mine were too hard, and some were too easy. I need to keep track of those, but that’s a project for another day. Once they had some success finding patterns, we turned to a class speed-round of the Hypothesis Game (but really more like people just sharing their observations; more cooperation than competition). We took turns by having each group suggest some data to try. This was much cleaner than student data but not as much fun. We came up with data like this (leaving off the two least useful variables, Cart B’s initial and final velocities):
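A minimal stand-in for that spreadsheet (my own sketch, not the original file): for carts that stick together, conservation of momentum fixes the shared final velocity.

```python
def final_velocity(mA, mB, vA_initial, vB_initial=0.0):
    """Shared final velocity of two carts that stick together.

    Same units in as out (e.g., masses in carts, velocities in cm/s).
    """
    total_momentum = mA * vA_initial + mB * vB_initial
    return total_momentum / (mA + mB)
```

Rounded to one decimal place, this reproduces the class data below; for example, 5 carts at 20 cm/s hitting 2 carts at rest gives 14.3 cm/s.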
A’s mass [cart] | B’s mass [cart] | A’s initial velocity [cm/s] | A’s final velocity [cm/s] |
---|---|---|---|
75 | 1 | 25 | 24.7 |
100 | 27932 | 1000 | 3.6 |
5 | 2 | 20 | 14.3 |
1 | 2 | 3 | 1.0 |
20 | 6 | 11 | 8.5 |
7 | 9 | 20 | 8.8 |
Hmmm, these kids would play Mastermind by choosing random permutations. I suggested something with equal masses and asked what they noticed. We made a connection to the graphs for those who plotted Cart A’s final velocity versus Cart A’s initial velocity and came up with a slope of about 1/2. I asked what would happen if I doubled the masses. They guessed, and we tried it. Then I told them to pick a combination of masses with a total mass of 5. It looked like this (sorted):
A’s mass [cart] | B’s mass [cart] | A’s initial velocity [cm/s] | A’s final velocity [cm/s] |
---|---|---|---|
0 | 5 | 60 | 0 |
1 | 4 | 60 | 12 |
2 | 3 | 60 | 24 |
3 | 2 | 60 | 36 |
4 | 1 | 60 | 48 |
5 | 0 | 60 | 60 |
The equal mass cases that I suggested were enough to get students to look at the fraction $v_{Af}/v_{Ai}$, and it wasn’t too long before they realized that this fraction was the same as $m_A/(m_A+m_B)$, and they made rules that turned out to be $v_{Af} = \frac{m_A}{m_A+m_B}\,v_{Ai}$. We checked it by making predictions and then trying them to see if they were correct. What was nice is that I didn’t have to tell anyone that their formula was wrong. We just tested it.
Here’s what I don’t get. I can turn the above equation for $v_{Af}$ into $m_A v_{Ai} = (m_A + m_B)\,v_{Af}$ by careful mathematical steps. I even tried it with one exceptionally clever class who figured it out a day before my other classes, but I think I might have melted their brains. For kids just out of algebra and now in geometry, manipulating equations with six variables seemed like torture. How do I put the focus on the product of mass and velocity as an interesting conserved quantity? How do I go from the simple equation to a deep physical principle?
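One way to write out those careful mathematical steps, starting from the rule the students found (Cart B at rest, carts sticking together):

```latex
v_{Af} = \frac{m_A}{m_A + m_B}\, v_{Ai}
\;\Longrightarrow\;
(m_A + m_B)\, v_{Af} = m_A v_{Ai}
\;\Longrightarrow\;
m_A v_{Ai} + m_B v_{Bi} = (m_A + m_B)\, v_{Af} ,
```

using $v_{Bi} = 0$; the right-hand side is the total momentum after the collision, so total momentum before equals total momentum after.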
Based on John’s comment below, I wanted to think about the implicit axioms we use in physics when coming up with formulas.
Doing so ensures that it will be maximally useful: We don’t have to know what’s going on at two different timer readings to calculate our conserved quantity.
Why does this have to be a sum? The formula has to be well-defined independent of our notion of objects, so considering two different objects to be a single object or a single object to be made of two objects shouldn’t change the answer. There are a limited number of ways of doing this. Actually, one could exponentiate both sides to make it a product instead, or, if it’s a product, take the logarithm of both sides to make it a sum.
When dealing with two objects ($A$ and $B$) colliding, we thus want to ensure that our conservation equation has the form

$$f(m_A, v_{Ai}) + f(m_B, v_{Bi}) = f(m_A, v_{Af}) + f(m_B, v_{Bf}),$$

where $f$ is a function of the mass and velocity.
I don’t think that this is the way I’d teach it. It might be enough to leave off the second axiom, just look for a functional form that makes some quantity come out the same before and after the collision, and notice that it has the simple form of the previous equation. When it doesn’t work, what kinds of questions can I ask to focus students on what they are trying?
Something interesting happened while teaching this. My students have one-to-one laptops, which I control strictly. Violate rules on when and how to use them, and you lose all privileges to use laptops until you choose to set up a parent-student-teacher meeting to resolve the issues and decide under what conditions to allow you to use them. This policy has worked well, but that’s a different story. It means, however, that I made a small number of paper copies for those who aren’t currently allowed to use their laptops. I taught the students how to use a spreadsheet to enter a formula and test it to see if their formula using the two mass/velocity pairs was conserved (the same before and after the collision). Presumably, students using paper were partly ignoring me during this spreadsheet review (difference 1). Entering the spreadsheet formula required four steps (two formulas and two fill-downs), taking up a significant portion of working memory (difference 2). Students working with the spreadsheet tended to focus on different kinds of formulas, whereas students working with the paper copy tended to focus more on the actual numbers and use mental math to see patterns (difference 3). The net result was that of the 5%-10% of my class that found the formula in the 30 minutes I gave them—admittedly low, but I decided to go with the time constraint and have this group explain to the rest—around 2/3 were paper users. I like using spreadsheets, but for a task like this, I think paper wins.
After pointing out that the masses were conserved (individually and together in this case), I did a Think Aloud and looked at the first two rows of data, in which the sum of the velocities was conserved because the object masses were equal. I used this as my idea, which students could immediately see was wrong if they looked at the third data point. This was how I showed them how to enter a formula in the spreadsheet.
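The same test works in a few lines of Python (my sketch of the spreadsheet logic, using the clean simulated rows from the total-mass-5 table): plug each candidate formula $f(m, v)$ into "sum before equals sum after" for every row and see which candidates survive.

```python
# (mA, mB, A's initial velocity, A's final velocity); the carts stick,
# so B's final velocity equals A's final velocity and B starts at rest.
rows = [
    (1, 4, 60, 12),
    (2, 3, 60, 24),
    (3, 2, 60, 36),
    (4, 1, 60, 48),
]

def is_conserved(f, data, tol=1e-6):
    """True if f(m, v) summed over both carts is the same before and after."""
    for mA, mB, vAi, vAf in data:
        before = f(mA, vAi) + f(mB, 0)
        after = f(mA, vAf) + f(mB, vAf)
        if abs(before - after) > tol:
            return False
    return True

candidates = {
    "v":   lambda m, v: v,       # the Think Aloud idea: sum of velocities
    "m*v": lambda m, v: m * v,   # product of mass and velocity
}
results = {name: is_conserved(f, rows) for name, f in candidates.items()}
```

The sum of velocities fails on every unequal-mass row, while the product of mass and velocity survives them all, which is exactly the argument the data lets students make.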
Rather than scaffolding with questions, I scaffolded with a series of hints:
I would reveal a new hint when I saw flagging spirits. I was proud that almost all of the students were trying things. Even when it became clear that I was adding hints, they didn’t stop to wait for better hints but kept at it. I was also super proud of the pair of students who worked out the conserved quantity (which violated hints 3 and 4), and we got to discuss which we liked better (being easier to remember, makes more “sense”), or , and what was the connection between them. The extension question was: What do you think would be conserved if there were three objects? four? more?
With N data points, one can easily calculate N-2 velocities and accelerations at the same timer readings. Of course, one can calculate N-1 velocities at intermediate timer readings, but this can make it difficult for students to make spreadsheets for velocity. Using the method above, one can calculate both instantaneous velocity and instantaneous acceleration at the mesh of timer readings by focusing on three data points at a time. This makes it easier to test the validity of the Constant Acceleration Particle Model (CAPM).
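Here's a sketch of that calculation in Python (my own code, assuming a uniform time step): central differences give both a velocity and an acceleration at each interior timer reading, N-2 of each from N data points.

```python
def velocity_acceleration(t, x):
    """Return (t_mid, v, a) at the interior timer readings.

    Assumes a uniform time step; uses three data points at a time.
    """
    dt = t[1] - t[0]
    t_mid, v, a = [], [], []
    for i in range(1, len(t) - 1):
        t_mid.append(t[i])
        v.append((x[i + 1] - x[i - 1]) / (2 * dt))          # central difference
        a.append((x[i + 1] - 2 * x[i] + x[i - 1]) / dt**2)  # second difference
    return t_mid, v, a
```

For data generated by constant acceleration, the recovered accelerations come out constant and the velocities land on a line, which is exactly the CAPM validity check.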
I like to throw the formula into a spreadsheet that I use when looking at student data. When they whiteboard in class, I type in their data to see how consistent it is, helping me to assess where students are going wrong and whether their graphs make any sense.
Update: The code below is not as encapsulated as I would like, but it’s still pretty usable. The different versions reflect different amounts of refactoring, but I hope that they give you enough to work with. If you have a question about how to modify one of my examples to produce your own favorite version of a stacked graph, please leave a comment, and I’ll try to create one within a reasonable time.
I’ve been using Asymptote to create graphs like the stacked kinematic graphs below. I’ll post more complicated graphs in the future, but here’s a simple one with axes only. I like Asymptote’s ability to use consistent units in the graphs so that the t-axis is synchronized. Although it takes more work to create a version in Asymptote than in a graphics package, I can immediately differentiate it to different problems just by changing a line here and there.
Old version with x/v/a–t graphs: kinematic_stack.pdf kinematic_stack.png
New version (from refactored code) with s/v/a–t graphs: kinematic_stack_pos_vel_acc.pdf kinematic_stack_pos_vel_acc.png (N.B. IB Physics uses the convention that position is given by the variable s, so I try to use it with students, although we also use x).
// Asymptote code for kinematic_stack_pos_vel_acc.asy
import graph;

pen axis_p = linewidth(1.4)+black;
pen grid_p = linewidth(1.0)+gray(0.2);

real hticks = 5;
real vMin_ticks = -5;
real vMax_ticks = 5;

void kingraph(picture pic, Label vL="",
              real vMin=vMin_ticks, real vMax=vMax_ticks,
              Label hL="$t$", real hMin=0, real hMax=hticks)
{
  scale(pic, Linear, Linear);
  xlimits(pic, hMin, hMax);
  ylimits(pic, vMin, vMax);
  xaxis(pic, hL, YZero, axis_p, Arrow(6));
  yaxis(pic, vL, XZero, axis_p, Arrow(6));
}

picture pos_pic;
kingraph(pos_pic, "$s$");
picture vel_pic;
kingraph(vel_pic, "$v$");
picture acc_pic;
kingraph(acc_pic, "$a$");

//xequals(pos_pic,3,Dotted);
//xequals(vel_pic,3,Dotted);
//xequals(acc_pic,3,Dotted);

// boring code for stacking the graphs. The only interesting part is the
// htick/vtick settings, which can be used to change the size of the
// horizontal and vertical units of the graphs.
void stack(picture pics[])
{
  real margin=2mm;
  real htick = .8cm;
  real vtick = .4cm;
  frame[] frames = new frame[pics.length];
  for(int i=0; i<pics.length; ++i) {
    unitsize(pics[i], htick, vtick);
    frames[i] = pics[i].fit();
    if (i>0) {
      frames[i] = shift(0,min(frames[i-1]).y-max(frames[i]).y-margin)*frames[i];
    }
    add(frames[i]);
  }
}

stack(new picture[] {pos_pic, vel_pic, acc_pic});
With numbered ticks: kinematic_stack_pos_vel_acc_nticks.pdf kinematic_stack_pos_vel_acc_nticks.png
// Asymptote code for kinematic_stack_pos_vel_acc_nticks.asy
import graph;

pen axis_p = linewidth(1.4)+black;
pen tick_p = linewidth(1.0)+gray(0.2)+fontsize(8);
pen ticklabel_p = tick_p;

int hticks = 5;
int vMin_ticks = -5;
int vMax_ticks = 5;

real[] hTicks_a = sequence(1, hticks);
real[] vTicks_a = sequence(vMin_ticks, vMax_ticks);
ticks hTicks = Ticks(format=Label("$%.4g$", align=E, p=ticklabel_p),
                     Ticks=hTicks_a, Size=1mm, pTick=tick_p);
ticks vTicks = Ticks(format=Label("$%.4g$", align=W, p=ticklabel_p),
                     Ticks=vTicks_a, Size=1mm, pTick=tick_p);

real axis_extra = 0.7; // extend the axis just a bit past the last tick mark

axis VZero(bool extend=true)
{
  return new void(picture pic, axisT axis) {
    axis.type = 0; // Value
    axis.value = pic.scale.x.T(pic.scale.x.scale.logarithmic ? 1 : 0); // I'm good with Linear 0
    axis.position = 1; // relative position of axis label
    axis.side = left;
    axis.align = 5*E;
    axis.extend = extend;
  };
}
axis VZero = VZero();

axis HZero(bool extend=true)
{
  return new void(picture pic, axisT axis) {
    axis.type = 0; // Value
    axis.value = pic.scale.y.T(pic.scale.y.scale.logarithmic ? 1 : 0); // I'm good with Linear 0
    axis.position = 0.5; // relative position of axis label
    axis.side = right;
    axis.align = W;
    axis.extend = extend;
  };
}
axis HZero = HZero();

void kingraph(picture pic, Label vL="",
              real vMin=vMin_ticks-axis_extra, real vMax=vMax_ticks+axis_extra,
              Label hL="$t$ [sec]", real hMin=0, real hMax=hticks+axis_extra)
{
  scale(pic, Linear, Linear);
  xlimits(pic, hMin, hMax);
  ylimits(pic, vMin, vMax);
  xaxis(pic=pic, L=hL, axis=VZero(false), p=axis_p, ticks=hTicks, arrow=Arrow(6), above=false);
  yaxis(pic=pic, L=vL, axis=HZero(false), p=axis_p, ticks=vTicks, arrow=Arrow(6), above=false);
}

picture pos_pic;
kingraph(pos_pic, "$s$ [m]");
picture vel_pic;
kingraph(vel_pic, "$v$ [m/sec]");
picture acc_pic;
kingraph(acc_pic, "$a$ [m/sec/sec]");

//xequals(pos_pic,3,Dotted);
//xequals(vel_pic,3,Dotted);
//xequals(acc_pic,3,Dotted);

// boring code for stacking the graphs. The only interesting part is the
// htick/vtick settings, which can be used to change the size of the
// horizontal and vertical units of the graphs.
void stack(picture pics[])
{
  real margin=2mm;
  real htick = .8cm;
  real vtick = .4cm;
  frame[] frames = new frame[pics.length];
  for(int i=0; i<pics.length; ++i) {
    unitsize(pics[i], htick, vtick);
    frames[i] = pics[i].fit();
    if (i>0) {
      frames[i] = shift(0,min(frames[i-1]).y-max(frames[i]).y-margin)*frames[i];
    }
    add(frames[i]);
  }
}

stack(new picture[] {pos_pic, vel_pic, acc_pic});
With unnumbered ticks: kinematic_stack_pos_vel_acc_ticks.pdf kinematic_stack_pos_vel_acc_ticks.png
// Asymptote code for kinematic_stack_pos_vel_acc_ticks.asy
import graph;

pen axis_p = linewidth(1.4)+black;
pen tick_p = linewidth(1.0)+gray(0.2)+fontsize(.01);
pen ticklabel_p = tick_p;

int hticks = 10;
int vMin_ticks = -5;
int vMax_ticks = 5;

real[] hTicks_a = sequence(1, hticks);
real[] vTicks_a = sequence(vMin_ticks, vMax_ticks);
ticks hTicks = Ticks(format=Label(" ", align=E, p=ticklabel_p),
                     Ticks=hTicks_a, Size=1mm, pTick=tick_p);
ticks vTicks = Ticks(format=Label(" ", align=W, p=ticklabel_p),
                     Ticks=vTicks_a, Size=1mm, pTick=tick_p);

real axis_extra = 0.7; // extend the axis just a bit past the last tick mark

axis VZero(bool extend=true)
{
  return new void(picture pic, axisT axis) {
    axis.type = 0; // Value
    axis.value = pic.scale.x.T(pic.scale.x.scale.logarithmic ? 1 : 0); // I'm good with Linear 0
    axis.position = 1; // relative position of axis label
    axis.side = left;
    axis.align = .01*S;
    axis.extend = extend;
  };
}
axis VZero = VZero();

axis HZero(bool extend=true)
{
  return new void(picture pic, axisT axis) {
    axis.type = 0; // Value
    axis.value = pic.scale.y.T(pic.scale.y.scale.logarithmic ? 1 : 0); // I'm good with Linear 0
    axis.position = 1; // relative position of axis label
    axis.side = right;
    axis.align = .01*W;
    axis.extend = extend;
  };
}
axis HZero = HZero();

void kingraph(picture pic, Label vL="",
              real vMin=vMin_ticks-axis_extra, real vMax=vMax_ticks+axis_extra,
              Label hL="$t$", real hMin=0, real hMax=hticks+axis_extra)
{
  scale(pic, Linear, Linear);
  xlimits(pic, hMin, hMax);
  ylimits(pic, vMin, vMax);
  xaxis(pic=pic, L=hL, axis=VZero(false), p=axis_p, ticks=hTicks, arrow=Arrow(6), above=false);
  yaxis(pic=pic, L=vL, axis=HZero(false), p=axis_p, ticks=vTicks, arrow=Arrow(6), above=false);
}

picture pos_pic;
kingraph(pos_pic, "$s$");
picture vel_pic;
kingraph(vel_pic, "$v$");
picture acc_pic;
kingraph(acc_pic, "$a$");

//xequals(pos_pic,3,Dotted);
//xequals(vel_pic,3,Dotted);
//xequals(acc_pic,3,Dotted);

// boring code for stacking the graphs. The only interesting part is the
// htick/vtick settings, which can be used to change the size of the
// horizontal and vertical units of the graphs.
void stack(picture pics[])
{
  real margin=2mm;
  real htick = .4cm;
  real vtick = .4cm;
  frame[] frames = new frame[pics.length];
  for(int i=0; i<pics.length; ++i) {
    unitsize(pics[i], htick, vtick);
    frames[i] = pics[i].fit();
    if (i>0) {
      frames[i] = shift(0,min(frames[i-1]).y-max(frames[i]).y-margin)*frames[i];
    }
    add(frames[i]);
  }
}

stack(new picture[] {pos_pic, vel_pic, acc_pic});
With grid: kinematic_stack_pos_vel_acc_grid.pdf kinematic_stack_pos_vel_acc_grid.png
// Asymptote code for kinematic_stack_pos_vel_acc_grid.asy
import graph;

pen axis_p = linewidth(1.4)+black;
pen grid_p = linewidth(0.8)+gray(0.2);
pen ticklabel_p = fontsize(.01);

int hticks = 10;
int vMin_ticks = -5;
int vMax_ticks = 5;

real[] hTicks_a = sequence(1, hticks);
real[] vTicks_a = sequence(vMin_ticks, vMax_ticks);

real axis_extra = 0.7; // extend the axis just a bit past the last tick mark

axis VZero(bool extend=true)
{
  return new void(picture pic, axisT axis) {
    axis.type = 0; // Value
    axis.value = pic.scale.x.T(pic.scale.x.scale.logarithmic ? 1 : 0); // I'm good with Linear 0
    axis.position = 1; // relative position of axis label
    axis.side = left;
    axis.align = 1.5*E;
    axis.extend = extend;
  };
}
axis VZero = VZero();

axis HZero(bool extend=true)
{
  return new void(picture pic, axisT axis) {
    axis.type = 0; // Value
    axis.value = pic.scale.y.T(pic.scale.y.scale.logarithmic ? 1 : 0); // I'm good with Linear 0
    axis.position = 1; // relative position of axis label
    axis.side = right;
    axis.align = W;
    axis.extend = extend;
  };
}
axis HZero = HZero();

void kingraph(picture pic, Label vL="",
              real vMin=vMin_ticks, real vMax=vMax_ticks,
              Label hL="$t$", real hMin=0, real hMax=hticks)
{
  scale(pic, Linear, Linear);
  xlimits(pic, hMin, hMax);
  ylimits(pic, vMin, vMax);
  // The space clears the labels on the ticks.
  ticks hTicks = LeftTicks(format=Label(" ", align=E, p=ticklabel_p),
                           Ticks=hTicks_a, extend=true, pTick=grid_p);
  ticks vTicks = RightTicks(format=Label(" ", align=W, p=ticklabel_p),
                            Ticks=vTicks_a, extend=true, pTick=grid_p);
  xaxis(pic=pic, L="", axis=BottomTop, p=grid_p, ticks=hTicks);
  yaxis(pic=pic, L="", axis=LeftRight, p=grid_p, ticks=vTicks);
  xaxis(pic=pic, L=hL, axis=VZero(false), p=axis_p, ticks=NoTicks, arrow=Arrow(6), above=true);
  yaxis(pic=pic, L=vL, axis=HZero(false), p=axis_p, ticks=NoTicks, arrow=Arrow(6), above=true);
}

picture pos_pic;
kingraph(pos_pic, "$s$");
picture vel_pic;
kingraph(vel_pic, "$v$");
picture acc_pic;
kingraph(acc_pic, "$a$");

//xequals(pos_pic,3,Dotted);
//xequals(vel_pic,3,Dotted);
//xequals(acc_pic,3,Dotted);

// boring code for stacking the graphs. The only interesting part is the
// htick/vtick settings, which can be used to change the size of the
// horizontal and vertical units of the graphs.
void stack(picture pics[])
{
  real margin=0mm;
  real htick = .4cm;
  real vtick = .4cm;
  frame[] frames = new frame[pics.length];
  for(int i=0; i<pics.length; ++i) {
    unitsize(pics[i], htick, vtick);
    frames[i] = pics[i].fit();
    if (i>0) {
      frames[i] = shift(0,min(frames[i-1]).y-max(frames[i]).y-margin)*frames[i];
    }
    add(frames[i]);
  }
}

stack(new picture[] {pos_pic, vel_pic, acc_pic});
Blank position and velocity graphs only: kinematic_stack_pos_vel.pdf kinematic_stack_pos_vel.png
// Asymptote code for kinematic_stack_pos_vel.asy
import graph;

pen axis_p = linewidth(1.4)+black;
pen grid_p = linewidth(1.0)+gray(0.2);

real hticks = 5;
real vMin_ticks = -5;
real vMax_ticks = 5;

void kingraph(picture pic, Label vL="",
              real vMin=vMin_ticks, real vMax=vMax_ticks,
              Label hL="$t$", real hMin=0, real hMax=hticks)
{
  scale(pic, Linear, Linear);
  xlimits(pic, hMin, hMax);
  ylimits(pic, vMin, vMax);
  xaxis(pic, hL, YZero, axis_p, Arrow(6));
  yaxis(pic, vL, XZero, axis_p, Arrow(6));
}

picture pos_pic;
kingraph(pos_pic, "$s$");
picture vel_pic;
kingraph(vel_pic, "$v$");

// boring code for stacking the graphs. You can change the stack statement
// at the bottom to choose which graphs to include in what order. The only
// interesting part of the stack function is the htick/vtick settings, which
// can be used to change the size of the horizontal and vertical units of
// the graphs.
void stack(picture pics[])
{
  real margin=2mm;
  real htick = .8cm;
  real vtick = .4cm;
  frame[] frames = new frame[pics.length];
  for(int i=0; i<pics.length; ++i) {
    unitsize(pics[i], htick, vtick);
    frames[i] = pics[i].fit();
    if (i>0) {
      frames[i] = shift(0,min(frames[i-1]).y-max(frames[i]).y-margin)*frames[i];
    }
    add(frames[i]);
  }
}

stack(new picture[] {pos_pic, vel_pic});
from pylab import *  # not needed if run from ipython --pylab

Independent_Variable = arange(0, 5, 0.1)  # You should enter some data here, like [0.0, 0.1, 0.2, 3.1]
Dependent_Variable = sin(Independent_Variable)  # You should enter some data here, like [-3.2, 44.8, 91.2, 5.0]
Dependent_Variable_Error = 0.1*abs(randn(len(Dependent_Variable)))

Fit_Coeffs = polyfit(Independent_Variable, Dependent_Variable, 2)  # 2 is for 2nd order polynomial
Dependent_Variable_Best_Fit = polyval(Fit_Coeffs, Independent_Variable)

figure()
errorbar(Independent_Variable, Dependent_Variable, yerr=Dependent_Variable_Error,
         fmt='o', label='experimental run 1')  # fmt='o' makes dots
plot(Independent_Variable, Dependent_Variable_Best_Fit, label='best fit quadratic polynomial')
# You can enter mathematics in the labels between dollar signs, like 'best fit $y=ax^{2}+bx+c$'
xlabel('Independent Variable axis label (IV units)')
ylabel('Dependent Variable axis label (DV units)')
legend()  # This adds a legend using the "label=" part of the commands above.
title('Scatter plot with error bars')  # Be descriptive! What is the context?
show()  # Needed when not running from ipython
This code generates a graph that can be exported to a PNG image file, which looks like this:
We whiteboard a lot in my class, and although trying so many cleaning methods has been expensive and labor intensive, I still haven’t spent more on cleaning supplies than three or four group-size regular dry-erase boards would cost. Here I’m collecting everything I’ve personally tried for removing dry-erase marker stains from shower/panel board (a cheap whiteboard substitute with a not-totally-smooth melamine surface).
I’ve used mainly Expo markers. Expo 2 (low-odor) markers ghost quite badly, so I switched to regular Expo markers. Red and purple are the worst colors. I had students use mainly black, blue, brown, and green (which leave a haze behind), plus red (which ghosts). When I run out of Expo markers this year, I’m going to get AusPen refillable dry-erase markers. I’ve tried them on my whiteboards, and in rough order of ease of erasing (easiest to hardest) the colors are black, orange, blue, green, red, purple. I’m going to try black, orange, blue, and green, though orange isn’t a very dark color.
I’ve used Expo dry-erase erasers and rags made from cotton T-shirts (OK, but they stain forever) and from old sheets (modal, and cotton with a high, actually too high, thread count; these stain too). Our janitor swears by old hotel sheets, which have a much lower thread count. Even when stained, rags continue to work well dry. I have students use one set of “erasers” dry and another set (not actual erasers, just rags) wet. Dry rags get most of the dry-erase marks, but some colors ghost worse than others.
For my classes, I use my old high-thread-count modal sheet rags for dry-erasing, replacing my aging supply of Expo dry-erasers. The Expo dry-erasers can be washed with soap and water, but I only do that about twice per year. I use my old cotton undershirt rags for wet-erasing. I wash the whole lot of rags after a week of heavy use, or after every few weeks of light use.
Note: Since GMS Surface Tech was kind enough to send me two of their products to try, I should emphasize that my informal tests don’t at all suggest how well their products would work on regular whiteboards, only on fake-whiteboard surfaces like shower boards.
See also the section below on rubbing compound.
And here are the ones I can’t report on because I haven’t tried them yet:
I’m always looking for that magical treatment that would make my whiteboards pristine, but I haven’t found it yet. Maybe one day I will try the whiteboard paint on top of one after I sand it down…
For applying these surface treatments, I use a fine sponge or rag to spread them, rubbing lightly. I buff them with a microfiber cleaning cloth.
On Twitter, Dan L (@d2thelhurst) asked (editing of [vector] is mine):
Hey physics, what do you think of “cos only” method of finding [vector] components? Compl angle and only use cos instead of same °, other trig func
His motivation was that students have trouble remembering which vector component requires cosine versus sine. For a long time, I’ve wanted to collaborate with our 9th grade geometry teachers to run a few quick trigonometry measurement labs, in which students take data in physics class and analyze/use the data in math class. I’d want to do it Modeling-style, where we don’t give them names for the ratios, just try to understand the relationship between angles and sides in a right triangle. Students would (hopefully) come away with a better understanding of similarity and trigonometric ratios. However, until the day comes that I finally write said curricular materials, I had the idea of making a trigonometric “slide rule” that students could use to look up angles and determine the cosines and sines (without necessarily using those names).
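The “cos only” idea can be sketched in a few lines of Python (my example with hypothetical numbers, not from the tweet): since sin θ = cos(90° − θ), every component can be written with cosine alone, using the angle between the vector and that component’s axis.

```python
import math

def components_cos_only(magnitude, angle_deg):
    """Resolve a vector using cosine only: each component uses the
    angle between the vector and that component's own axis."""
    x = magnitude * math.cos(math.radians(angle_deg))       # angle from the x-axis
    y = magnitude * math.cos(math.radians(90 - angle_deg))  # complementary angle, from the y-axis
    return x, y

def components_standard(magnitude, angle_deg):
    """The usual cos/sin decomposition, for comparison."""
    x = magnitude * math.cos(math.radians(angle_deg))
    y = magnitude * math.sin(math.radians(angle_deg))
    return x, y

# The two methods agree because sin(theta) = cos(90 - theta):
print(components_cos_only(10, 30))
print(components_standard(10, 30))
```

The trade-off, of course, is that students must find the complementary angle correctly, which is its own source of errors.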
See the Trigonometric Slide Rule on GeoGebraTube. This version is the easiest to use and gives you answers. No graph reading. Of course, if we are using a computer, we might as well use the computer to solve the full polar to rectangular problem, but if you want students to understand ratios, it’s probably best to leave them a little mental work.
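For reference, the full polar-to-rectangular problem the computer could solve outright is only a few lines; here’s a sketch (function names are mine):

```python
import math

def polar_to_rect(r, theta_deg):
    """Convert polar coordinates (r, theta in degrees) to rectangular (x, y)."""
    theta = math.radians(theta_deg)
    return r * math.cos(theta), r * math.sin(theta)

def rect_to_polar(x, y):
    """Convert rectangular (x, y) back to polar (r, theta in degrees)."""
    return math.hypot(x, y), math.degrees(math.atan2(y, x))

# A vector of magnitude 5 at about 53.13 degrees is roughly a 3-4-5 triangle:
x, y = polar_to_rect(5, 53.13)
```

Handing students this function answers the question but hides exactly the ratio reasoning the slide rule is meant to exercise.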
One version masks the grid outside the circle, and one keeps the grid outside the circle. Students should be able to estimate the cosine and sine of angles to 2 decimal places.
This version lists the coordinates of points every 5 degrees. The coordinates have 3 decimal places.
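A table like that is easy to regenerate; here’s a short Python sketch (mine, not the code used for the actual graphic) that prints the coordinates every 5 degrees to 3 decimal places:

```python
import math

# Print (cos, sin) coordinates on the unit circle every 5 degrees,
# rounded to 3 decimal places, like the labeled version of the slide rule.
for angle in range(0, 360, 5):
    theta = math.radians(angle)
    print(f"{angle:3d} deg  ({math.cos(theta):6.3f}, {math.sin(theta):6.3f})")
```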
I used Asymptote to create the graphics. My version is set to output PDF by default, but otherwise, here’s the source code:
import graph;

defaultpen(fontsize(10));
pen thick_p = linewidth(1.5);
pen axis_p = black+fontsize(8);
pen grid_major_p = gray(0.5)+linewidth(1.0);
pen grid_minor_p = gray(0.7)+linewidth(0.5);
pen circle_p = thick_p+black;
pen radial_p = black;
pen radial_accent_p = linewidth(1.5)+radial_p;
pen degree_p = black;

real tick_major = 0.1;
real tick_minor = 0.02;
real tick_low = 0.97;
real tick_high = 1.03;
int tick_every = 5;

// letter paper with 0.5" margins:
real width = 8.5 inches - 2*0.5 inches;
real height = 11 inches - 2*0.5 inches;
size(width, height);

scale(true, true);
xlimits(-1.1, 1.1);
ylimits(-1.1, 1.1);

real axis_extend = 1.0;
real xmin = -axis_extend;
real xmax = axis_extend;
real ymin = -axis_extend;
real ymax = axis_extend;

real dummy(real x) { return 1.001*x; }
draw(graph(dummy, -1.0, 1.0), invisible);

pen thin = linewidth(0.5*linewidth());
xaxis("", axis=LeftRight, axis_p, xmin=-1.1, xmax=1.1, Ticks(format="%", beginlabel=false, endlabel=false, Step=tick_major, step=tick_minor, begin=true, end=true, extend=true, pTick=grid_major_p, ptick=grid_minor_p), above=false);
yaxis("", axis=LeftRight, axis_p, ymin=-1.1, ymax=1.1, Ticks(format="%", beginlabel=false, endlabel=false, Step=tick_major, step=tick_minor, begin=true, end=true, extend=true, pTick=grid_major_p, ptick=grid_minor_p), above=false);
xaxis("", axis=YZero, axis_p, xmin=xmin, xmax=xmax, LeftTicks(beginlabel=false, endlabel=false, Step=tick_major, step=tick_minor, begin=false, end=false, NoZero, extend=false, pTick=axis_p, ptick=grid_minor_p), above=false);
yaxis("", axis=XZero, axis_p, ymin=ymin, ymax=ymax, RightTicks(beginlabel=false, endlabel=false, Step=tick_major, step=tick_minor, begin=false, end=false, NoZero, extend=false, pTick=axis_p, ptick=grid_minor_p), above=false);

draw((-1,0)--(1,0), axis_p+thick_p);
draw((0,-1)--(0,1), axis_p+thick_p);

path unitsquare = (-1,-1)--(-1,1)--(1,1)--(1,-1)--cycle;
//filldraw(Circle((0,0),1)^^(scale(1.1)*unitsquare), evenodd+white, white);  // mask the grid outside the circle

for(int angle = 1; angle < 360; ++angle) {
    if (angle % tick_every == 0) continue;
    draw(tick_low*dir(angle)--dir(angle), radial_p);
}

string angle_label;
for(int angle = 0; angle < 360; angle += tick_every) {
    draw(0.97*dir(angle)--tick_high*dir(angle), radial_accent_p);
    angle_label = "$"+format("%d",angle)+"^{\circ}$";
    //angle_label = "$"+format("%d",angle)+"^{\circ}\ ("+format("%#1.3f",Cos(angle))+","+format("%#1.3f",Sin(angle))+")$";  // for cheat sheet
    label(rotate(angle)*Label(angle_label), tick_high*dir(angle), dir(angle), degree_p);
}

draw(Circle((0,0),1), circle_p);