Posts filed under ‘statistics’

ICTCM & What’s On The Way

I just got back from the 24th ICTCM and, as usual, I came home full of great ideas. Two trends stood out: pencasts and simulations.

There were several talks on using smart pens. I recently received one as a gift, and I think it is becoming a very important tool for me as an educator. Smart pens can be used to communicate with students who email questions, to post supplementary materials online, or even to take notes. I will share my approach in future blog posts.

There were also several talks that incorporated the use of simulations in introductory statistics. This is a BIG idea that will one day revolutionize the way we teach inferential statistics. Resampling and bootstrapping are effective ways to show visual learners what is really going on. I usually use StatCrunch for this purpose.

Matt Davis (Chabot College) did a great job, and he has some fantastic simulations. You can find him through a quick Google search for “Matt Davis Chabot”, and he seems pretty willing to share. (Tell him I sent you.)
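If you would like to see the resampling idea outside of StatCrunch, here is a rough sketch in Python. The exam scores are made up, and this shows only the core bootstrap idea (resample with replacement, then look at the middle 95% of the resampled means), not any particular classroom activity.

import random

# Made-up sample of exam scores, standing in for real class data
scores = [72, 85, 90, 66, 78, 95, 81, 70, 88, 74]

# Draw 10,000 bootstrap resamples (with replacement, same size as the data)
boot_means = []
for _ in range(10000):
    resample = [random.choice(scores) for _ in scores]
    boot_means.append(sum(resample) / len(resample))

# The middle 95% of the resampled means gives an approximate confidence interval
boot_means.sort()
lower = boot_means[int(0.025 * len(boot_means))]
upper = boot_means[int(0.975 * len(boot_means))]
print(f"Approximate 95% bootstrap CI for the mean: ({lower:.1f}, {upper:.1f})")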

I gave a talk on my new mastery-based learning approach using MyMathLab, which incorporates elements of game design. I think that my students have shifted their focus from doing homework to earn points to doing homework to learn and understand mathematics. I will be putting together a blog series on this new approach that shares last semester’s results, explains exactly how I set my class up, and covers the elements of game design that I have incorporated, along with some commentary from my son.

Another blog series I will put together covers my top 10-ish MyMathLab features. Now that most classes have moved to the new MyMathLab design, it’s time to go over them.

I’m looking forward to getting this blog rolling again. I hope you have all been well.

-George

I am a math instructor at College of the Sequoias in Visalia, CA. If there’s a particular topic you’d like me to address, or if you have a question or a comment, please let me know. You can reach me through the contact page on my website – http://georgewoodbury.com.

March 26, 2012 at 1:45 pm

2011 AMATYC Wrap Up

This year’s annual AMATYC conference in Austin just wrapped up and, as usual, was very inspiring. One topic that came up in several presentations was incorporating simulations to teach hypothesis testing. I have used this approach recently in my own statistics class, and I can report that it helps students conceptually understand hypothesis testing in general and p-values in particular.
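To give a flavor of the approach, here is a rough sketch in Python of a simulated p-value. The scenario (62 heads in 100 flips of a coin we suspect favors heads) is made up for illustration, not taken from any particular presentation.

import random

# Hypothetical example: we observed 62 heads in 100 flips and suspect the coin favors heads.
# Simulate the null hypothesis (a fair coin) many times and count how often
# we get a result at least as extreme as the one we observed.
observed_heads = 62
simulations = 10000
at_least_as_extreme = 0

for _ in range(simulations):
    heads = sum(random.random() < 0.5 for _ in range(100))
    if heads >= observed_heads:
        at_least_as_extreme += 1

# The simulated p-value is the proportion of "fair coin" trials
# that look at least as extreme as the real data.
print(f"Simulated p-value: {at_least_as_extreme / simulations:.4f}")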

I will be trying to recruit several AMATYC presenters to share their work by writing a guest blog. Stay tuned!

-George

November 14, 2011 at 4:35 pm

Setting Up Binomial and Poisson Probability Problems

I have uploaded two videos to YouTube that go over how to set up binomial and Poisson probability problems. (There are 8 binomial problems and 6 Poisson problems.) I go over the steps for identifying each type of problem, and I give the correct answers. If you’d like a copy of the actual problems, just drop me a line.

Binomial: http://www.youtube.com/watch?v=jLAePWjEZYE
Poisson: http://www.youtube.com/watch?v=GupBzWFL-KY
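If you would rather check answers with software than with tables, here is a quick sketch using Python’s scipy.stats. The values of n, p, and the Poisson mean below are placeholders, not the numbers from my problem sets.

from scipy.stats import binom, poisson

# Binomial setup: fixed number of trials n, probability of success p on each trial.
# Placeholder example: 10 trials with p = 0.25
n, p = 10, 0.25
print("P(x = 3) =", binom.pmf(3, n, p))
print("P(x <= 3) =", binom.cdf(3, n, p))

# Poisson setup: counts of events over an interval with a known mean rate.
# Placeholder example: a mean of 4.5 events per interval
mu = 4.5
print("P(x = 2) =", poisson.pmf(2, mu))
print("P(x <= 2) =", poisson.cdf(2, mu))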

– George

October 2, 2011 at 8:06 pm

StatCrunch – Binomial Probability Calculator

I just finished up a short unit on binomial probabilities with my Intro Stat class, using StatCrunch as the primary method of calculating probabilities. To access the calculator in StatCrunch, click the Stat button, and select Binomial from the Calculator option.

Here is the interface.

Enter the number of trials in the box labeled n and the probability of success on one trial in the box labeled p. When it comes to the number of successes, you have several options: <= (for ≤), >= (for ≥), <, >, or =. Select the option you need, enter the appropriate value for x, and press Compute.

By the way, if you need to find something like P(3 ≤ x ≤ 7), you will have to do it in two steps. (StatCrunch does not have a “between 3 and 7” option.) First find P(x ≤ 7), then find P(x < 3) and subtract it from that result.
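For anyone who wants to verify the two-step idea outside of StatCrunch, here is a tiny sketch in Python; the values n = 20 and p = 0.3 are just placeholders.

from scipy.stats import binom

# Placeholder values for n and p; the point is the two-step "between" calculation.
n, p = 20, 0.3

# P(3 <= x <= 7) = P(x <= 7) - P(x < 3), and P(x < 3) = P(x <= 2) for a discrete x.
prob = binom.cdf(7, n, p) - binom.cdf(2, n, p)
print(f"P(3 <= x <= 7) = {prob:.4f}")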

One of the features I like is the graphical display of the probabilities. The values of x that you are working with are displayed with red bars.

By making the actual calculations easier, I find that we can spend more time on challenging problems. My students also have a better understanding of the big picture, instead of getting lost in the weeds with their calculators and 20 pages of binomial probability tables.

– George

September 29, 2011 at 9:37 am

Estimating the Mean & Standard Deviation from a Frequency Distribution?

This week I covered a unit on descriptive statistics – mean, median, mode, quartiles, range, standard deviation, variance, and so on. One section that I struggled with was the one on estimating the mean and standard deviation of a data set from a frequency distribution.

As far as estimating the mean goes, I can understand covering this. We want a value that we can consider “typical”. But might it just be better to discuss the median as the estimated measure of central tendency? In other words, isn’t knowing that the median is towards the beginning of the 40-49 class almost as useful as knowing that the estimated mean is 42.3?

The calculation for the estimated mean is not difficult, so covering it will not overwhelm the students. The same is not true for estimating the standard deviation. That is one tedious process, and it distracts students from the big picture. I stopped covering it in class long ago, and I have not felt one bit of remorse.

Here’s my question: if there were an online tool that made it easy to calculate these two measures (input the number of classes, then the midpoint and frequency for each class), would there be any justification to continue teaching these two topics by hand? I like how the TI-84 handles these calculations, but I do not use the calculator in my classes. If there were a website that did these calculations just like the TI-84, I would have my students quickly calculate the mean and standard deviation and interpret their results.
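Until such a tool exists, here is a rough sketch in Python of the arithmetic it would do, using a made-up frequency distribution and the usual grouped-data formulas (each observation replaced by its class midpoint).

from math import sqrt

# Made-up frequency distribution: (class midpoint, frequency)
classes = [(24.5, 4), (34.5, 9), (44.5, 12), (54.5, 8), (64.5, 3)]

n = sum(f for _, f in classes)
mean = sum(m * f for m, f in classes) / n

# Sample standard deviation using the grouped-data formula
ss = sum(f * (m - mean) ** 2 for m, f in classes)
std_dev = sqrt(ss / (n - 1))

print(f"Estimated mean: {mean:.1f}")
print(f"Estimated standard deviation: {std_dev:.1f}")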

So, can you make the case that I should continue to teach my students to make these estimates by hand? I’d love to hear your thoughts.

– George

September 5, 2011 at 3:32 pm

Intro Stats – Sampling Techniques Activity

During the first week of class I go over different sampling techniques – Convenience, Random, Systematic, Cluster, & Stratified. Even though most of our sampling during the semester will be convenience sampling, I think it’s important for students to understand the other types that are available. It gives me a chance to talk about what I feel is one of the major themes of statistics, the tradeoff between easy/less reliable and difficult/more reliable. Basically we are looking for a practical approach that yields quality results.

I use Mike Sullivan’s Intro Stats text, and yesterday I used one of his activities to show the different sampling techniques in action. I asked my students two questions: How much did you spend this semester on books and supplies? Do you own an iPhone? That gave us two variables, one quantitative and one qualitative. I told the students that through sampling techniques we would try to estimate the mean amount spent by students in this class, as well as the percentage of students in this class who own an iPhone.

Random Sampling

I began by numbering the students from 1 through 44. I then used Microsoft Excel to select random numbers until we had a sample of 10 students, wrote the data for those students on the board, and had the class calculate the sample mean and proportion.
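For anyone who prefers code to Excel, here is a quick Python sketch of the same draw; the roster is just the numbers 1 through 44 standing in for my students.

import random

# Hypothetical roster: students numbered 1 through 44
students = list(range(1, 45))

# Simple random sample of 10 students
sample = random.sample(students, 10)
print("Random sample:", sorted(sample))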

Systematic Sampling

Students struggle with systematic sampling in the homework because they are given an abstract situation, asked to calculate a step value, and asked to give the number of the 47th individual selected. It is more effective to show systematic sampling by actually taking a sample. We had 44 students and I wanted a sample size of 10, so my students told me that a step size of 4 would work (44/10 = 4.4, rounded down). We randomly selected a starting value using Excel, and then sampled every 4th student from there. Students started to understand that this was pretty similar to the way we used to count off in gym class to pick teams. Once I had the data on the board, we calculated the sample mean and proportion.
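Here is a sketch of the same systematic draw in Python. Note that with 44 students and a step of 4 the sample actually comes out to 11 students, which is close enough for classroom purposes.

import random

students = list(range(1, 45))  # roster numbered 1 through 44
step = 4                       # 44 students / desired sample of 10, rounded down

# Pick a random starting point among the first 4 students, then take every 4th student
start = random.randint(0, step - 1)
sample = students[start::step]
print("Systematic sample:", sample)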

Cluster Sampling

I have 6 rows of tables in my classroom, so we made each row a cluster. We selected a row at random (using Excel) and sampled each individual in that row. In my experience, students do pretty well with the idea that cluster sampling can be done by dividing the population geographically and sampling each individual in the selected clusters. Again, once the data was on the board we calculated the sample mean and proportion.
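Here is a Python sketch of the cluster draw; the rows and the student numbers in them are made up.

import random

# Six rows of tables, each row a cluster; the student numbers are hypothetical
rows = {
    1: [1, 2, 3, 4, 5, 6, 7],
    2: [8, 9, 10, 11, 12, 13, 14, 15],
    3: [16, 17, 18, 19, 20, 21, 22],
    4: [23, 24, 25, 26, 27, 28, 29, 30],
    5: [31, 32, 33, 34, 35, 36, 37],
    6: [38, 39, 40, 41, 42, 43, 44],
}

# Select one row at random and sample every individual in it
chosen_row = random.choice(list(rows))
print(f"Cluster sample (row {chosen_row}):", rows[chosen_row])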

Stratified Sampling

I began by asking my students which strata we could use to categorize students, and they came up with gender pretty quickly. Gender is easy to use, as opposed to year in school, religious affiliation, and so on, because it is information we already know. I took a sample of 6 female students and 3 male students, as my class has roughly a 2:1 ratio of females to males. We renumbered the students within each group and used Excel to randomly select them. Once the data was on the board we calculated the sample mean and proportion.
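And here is a sketch of the stratified draw in Python; the split of student numbers into female and male strata is made up, but it keeps the roughly 2:1 ratio.

import random

# Made-up strata with roughly a 2:1 ratio of females to males
females = list(range(1, 30))   # 29 hypothetical student numbers
males = list(range(30, 45))    # 15 hypothetical student numbers

# Sample proportionally: 6 from the female stratum, 3 from the male stratum
sample = random.sample(females, 6) + random.sample(males, 3)
print("Stratified sample:", sorted(sample))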

Wrap Up

Once we were done sampling I collected all of the information for the entire class and calculated the population mean (~ $315) and proportion (~38%). We then compared our sample statistics to the population parameters, and the students really got to see that individual samples vary. This is a really BIG idea as we head towards inferential statistics.

I feel that my students got a much better handle on sampling techniques and on sampling in general. Next week I will show them how to use StatCrunch to draw a random sample.

Do you have any sampling activities that you use and really like? Or any other activities that you’d like to share? Leave a comment or drop me a line – maybe we can arrange a guest blog!

– George

August 23, 2011 at 10:51 am

Statistics – Levels of Data?

In the last couple of years I have shifted to a technology-based approach in the Intro Statistics classes I teach. I am now in the process of trimming out topics that I don’t feel are necessary. First topic to consider: Levels of Data.

Students need to understand the difference between qualitative and quantitative data, as that skill will help them to determine which hypothesis test is appropriate in a given situation. But do we need to cover the different levels of data – nominal, ordinal, interval, and ratio? I don’t think so.

Later in the course there is no advantage gained by determining whether data are interval level or ratio level. There is one situation with qualitative data where knowing whether data are nominal level or ordinal level matters, and that is when it comes to finding the median of a data set. Since we cannot find the median if we cannot sort the data in ascending order, we cannot find the median of nominal-level data. I feel I can cover this idea in about 10 seconds when I introduce the concept of the median, so why spend time in the first class session on these levels of data? I figure that by cutting this topic out I can spend another 10-15 minutes talking about the “big picture” of statistics, and that is more valuable to my students.

OK, this may not seem like a big deal, but I know that by trimming unnecessary topics from this course I will be able to incorporate more inferential topics at the end of the semester. I’d love to have more time to devote to regression. I want to be able to cover some nonparametric tests.

So, how do you feel about covering levels of data? What are the topics that you think can be removed from the traditional intro stats course? As the semester progresses I’ll be sharing my thoughts, and I’d love to hear yours.

– George

August 17, 2011 at 5:47 am
