


EDCI 5544

Erin Strack

Class Wrap-up for Feb. 16th

Chapter 4: Research variables, validity and reliability

*Useful chapter for quantitative research

Variable Types

·        Variables are features or qualities that change

§         For example, the age of students in our class, or their years of teaching experience

·        Experimental research explores whether there is a relationship between independent and dependent variables (plus a constant)

§         Dependent Variable = Independent Variable + Constant  (DV = IV + C)

·        Relationships between independent variables and dependent variables

§         We can have multiple dependent and independent variables but always only one constant

§         We always must have a constant, and we must know the relationship between the dependent and independent variables (Ex: Counting cards when playing poker)

·        Independent and dependent variables are the two main types

§         There are other types of variables but these are the two most important

·        The independent variable is the one we think will cause the results

§         We always know the independent variable first; it causes changes in the dependent variable

·        The dependent variable is the one we measure to see what effects the independent variable has on it

§         We cannot write IV = DV + Constant because the independent variable has an effect on the dependent variable, not the other way around

§         Ex: Student achievement is DV and teaching methods are IV, then Student Achievement = Method 1 + Method 2 + Constant
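As an illustration only, the class equation DV = IV + C can be read as a simple linear model. The sketch below fits made-up achievement scores for two teaching methods with NumPy; the scores, group sizes, and coding are all hypothetical:

```python
import numpy as np

# Hypothetical data for Student Achievement = Method 1 + Method 2 + Constant.
# method1 is an indicator (1 = taught with Method 1, 0 = taught with Method 2);
# Method 2 serves as the baseline absorbed by the constant.
method1 = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([78, 82, 80, 70, 68, 72])

# Design matrix: one column for the independent variable, one for the constant
X = np.column_stack([method1, np.ones_like(scores)])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
effect, constant = coef  # effect of Method 1 relative to Method 2, plus constant
print(round(effect, 1), round(constant, 1))
```

With dummy coding like this, the constant is simply the Method 2 group mean and the effect is the difference between the two group means.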

·        Moderator variables: characteristics that may result in an interaction between an independent variable and other variables

§         IV ---(moderator)--- DV: if I increase the IV, its effect on the DV operates through the moderator variable

§         Ex: A professor claims that students with a strong sexual desire perform better in class, but he overlooks a moderator variable: students with a strong sex drive may have more energy, so they can study longer, and that is the real reason they perform better

·        Gender or length of residency can also be considered a moderator variable

·        Intervening variables are like moderator variables that researchers cannot recognize; it is impossible to identify all intervening variables

§         Moderator variables are variables that the researchers know about and can control

§         Intervening variables are variables that the researchers don't know about or whose effects they cannot anticipate

§         We don't care about moderator or intervening variables in themselves; we just want to control them so they don't become the focus of the study

§         Ex: If worried about gender as a moderator variable, then just have equal number of boys and girls in each experimental group

§         Control variables: variables that might interfere with the relationship between the dependent and independent variables, so the researcher holds them constant

§         It is very difficult to control all variables, sometimes we can identify them but not control them

§         Solution 1: Control variable is language background, balance the number of students who speak different languages

§         Ex: If you have 5 students who speak Chinese then have 5 students who speak Japanese

§         Solution 2: Only include students from one language background

§         Ex: All students are speakers of Chinese only


§         Researchers provide working or operational definitions when a variable is difficult to measure

§         Ex: Student achievement is difficult to define, so we use a tool (a test) to create a working or operational definition of student achievement.

§         Ex: When we study motivation we must have a quantifiable number to create an operational definition, so we use a tool (a survey); we must always use numbers


Scales of Measurement

§         Three commonly used scales: nominal (Male = 1 and Female = 2), ordinal (students' scores), and interval (age, years of residency)

§         You can change your scale, for example Female = 1 and Male = 2, or student scores can be out of 500 instead of 100, but you must be consistent throughout your entire study
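The consistency rule above can be sketched in Python with hypothetical data: a nominal recoding (swapping Male/Female codes) and a rescaling of scores from "out of 100" to "out of 500" both work, as long as one scheme is used throughout:

```python
# Nominal codes are arbitrary labels: Male = 1 / Female = 2 works just as
# well as Male = 2 / Female = 1, provided one coding is used consistently.
genders = ["Male", "Female", "Female", "Male"]
coding_a = {"Male": 1, "Female": 2}
coding_b = {"Male": 2, "Female": 1}  # also fine, if used throughout the study
coded_a = [coding_a[g] for g in genders]
coded_b = [coding_b[g] for g in genders]

# Rescaling scores from "out of 100" to "out of 500" preserves every
# comparison between students.
scores_100 = [85, 70, 92]
scores_500 = [s * 5 for s in scores_100]
same_order = (sorted(range(3), key=lambda i: scores_100[i])
              == sorted(range(3), key=lambda i: scores_500[i]))
print(coded_a, coded_b, same_order)
```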



Validity

§         If we want to generalize our study to a broader population, we must first make sure our research design is valid

§         If it is not valid, we cannot generalize results at all

§         Content validity: refers to the representativeness of our measurement

§         You must test what you teach and teach what you test: items on the test must be representative of what was learned. A test is not valid if it does not measure what was taught, or if students can object during the test, "We never learned this."

§         If you are testing final sound identification, you must have examples of both consonant and vowel ending sounds in test

§         Face validity: refers to how easily the researcher can convince others that the instrument being used has content validity

§         It can be difficult to find experts (someone with experience with the survey or tool you are using, or a Ph.D. in your area). You must find at least 3 experts to review your instrument, and at least 2 of those 3 must agree that it has face validity; it is easier to find experts if you work at a university

§         Construct validity: refers to the degree to which the research captures the construct of interest, such as language proficiency or aptitude, which cannot be measured directly the way height or age can; construct validity is stronger if the instrument considers multiple factors

·        Ex: To define reading achievement, have students read real words, pseudo-words, and letter sounds; testing multiple aspects of reading gives stronger construct validity for the claim that the student can read

§         Criterion-related validity: refers to the extent to which tests used in research study are comparable to other well-established tests of the construct in question

·        Ex: You can create a test and compare it to TOEFL, if students who do well on your test also do well on the TOEFL then your test has criterion-related validity, or if a classroom assessment is highly correlated to the MAP test then it has criterion-related validity

§         Predictive validity: whether the instrument can predict performance of other measures

·        Ex: If I create a test that’s highly correlated to the TOEFL then I can predict the students who do well on my test will do well on the TOEFL

·        In order to have predictive validity you must use a 0.05 standard: if 100 students pass your test and you predict they will all pass the TOEFL, then no more than 4 of those 100 students can fail the TOEFL (the error rate must stay below 5%) for your test to have predictive validity

·        Study materials for GRE have predictive validity, if you do well on these study materials then you will do well on the GRE
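The 0.05 standard above amounts to a simple error-rate check. The sketch below uses hypothetical TOEFL outcomes for 100 students who all passed our test and were all predicted to pass:

```python
# Hypothetical outcomes: we predicted all 100 students who passed our test
# would also pass the TOEFL; 4 of them failed.
toefl_outcomes = [True] * 96 + [False] * 4  # True = passed the TOEFL

error_rate = toefl_outcomes.count(False) / len(toefl_outcomes)
meets_standard = error_rate < 0.05  # the 0.05 standard from the notes
print(error_rate, meets_standard)
```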

§         Internal validity: the degree to which a study is logically sound and free of confounding variables, control all moderator and intervening variables if you can identify them

·        We must have internal validity before we can try for external validity, first it must be effective in my classroom and then I’ll try it in a different classroom


Participant Inattention and Attitude

§         Hawthorne Effect: refers to improvements in quality which are not the result of intended changes to working conditions but due mainly to the fact that workers are aware they are being observed

·        Ex: A study states brighter lights will increase productivity of workers, productivity does go up but it’s because workers know they are being observed

§         Halo Effect: participants may try to please researchers by giving the answers or responses they think are expected

·        Participants usually want to make themselves look good or to please, Ex: if given a survey about drug usage most would answer that they’ve never done it before regardless of the truth

·        These are moderator variables and must be controlled by researchers

·        Ways to control these effects:

-         Give participants a 3-second time limit to answer survey questions so they must answer with their initial instinct

-         Hide observers so participants do not know they are being watched

-         Make multiple observations and only use one

-         Introduce the observer as only a teacher helper (it's okay to withhold information as long as we have a good reason and the information is disclosed after data collection)

§         Negative effect: fatigue and boredom, students may choose all 5’s or all 1’s on Likert scale without even reading questions

·        30 questions is good enough, 50 questions is too many- if students answer the same for every question then their results must be thrown out

§         Solve this by presenting items in different orders

·        We do this to control the situation and force students to read each question; if you include negatively worded items, however, remember to reverse the Likert scale when reviewing the data (do not reverse the Likert scale on the survey itself)

§         General inattentiveness: participants give different answers to the same question when it is asked more than once, possibly because they are unsure of their answer; solve this by giving the same test items in a different order

·        Counterbalancing: give an item 2 times to make sure participants know and are sure of their answer, if they are not sure then they are wrong

·        If they answer the item right both times it is given then they know it, if they answer it right only one time then they are confused, and if they answer it wrong both times then they don’t know it
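The counterbalancing rule above is a small decision table; a minimal sketch, with hypothetical item responses:

```python
def classify_item(first_correct: bool, second_correct: bool) -> str:
    """Counterbalancing rule from the notes: each item is given twice,
    and the pair of answers tells us what the participant knows."""
    if first_correct and second_correct:
        return "knows it"
    if first_correct or second_correct:
        return "confused"
    return "doesn't know it"

results = [classify_item(True, True), classify_item(False, True),
           classify_item(False, False)]
print(results)
```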


Participant Maturation

§         Maturation is most relevant in longitudinal studies and particularly those involving children

·        Longitudinal surveys are not just one picture of one moment in time, but several pictures to compare results

§         Solution: have a control group to compare to so we know whether the effect is due to treatment or natural maturation

·        You must have a control group to show student achievement is due to your treatment rather than natural maturation

·        Use a pre-test and post-test; if the experimental group performs higher than the control group, then it is because of your method


Data Collection: location and collector

§         Data could differ depending on the setting and the collector:

-         In a quiet or noisy environment

-         At school or at home (some illegal immigrants will not feel comfortable answering questions at school, Ex: prisoners may answer questions differently if Sheriff is present at that time)

-         Whether collectors are native speakers or not (if data is collected verbally this is important)

-         Whether collectors are nice or not (participants will perceive you in a certain way, appearance plays a huge role, especially for women)

-         Whether collectors are trained or not and how well they are trained (must be trained by the same person to ensure they are receiving exactly the same information)

·        Make sure the situation is exactly the same throughout the entire study


Instrumentation and test effects

§         Equivalence between pre- and post-tests: they must be equal in difficulty

·        Ex: Do NOT make a hard pretest and an easy posttest in order to show growth

§         Solution: write the pretests and posttests at the same time; pilot test them to make sure the results are similar; keep vocabulary at the same difficulty; or split one test in half (create 30 questions at the same time, then randomly pick 15 for the pretest and 15 for the posttest)
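The split-in-half solution can be sketched with the standard library; item labels and the seed are hypothetical, the point is that a random split yields two disjoint 15-item halves:

```python
import random

# Write all 30 questions at the same time, then randomly assign 15 to the
# pretest and 15 to the posttest. A fixed seed makes the split reproducible.
items = [f"Q{i}" for i in range(1, 31)]
rng = random.Random(2017)
shuffled = items[:]
rng.shuffle(shuffled)
pretest, posttest = shuffled[:15], shuffled[15:]
print(len(pretest), len(posttest), set(pretest) & set(posttest) == set())
```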


External validity

§         Refers to the extent to which the results of a study can be applied to circumstances outside the specific research setting in which the study was carried out; we must first have internal validity

·        Ex: If a plant grows well with you, sell seeds and see if it grows well somewhere else

§         Sampling is the process of selecting units from a population of interest so that by studying the sample we may fairly generalize our results back to the population as a whole

§         In random sampling, every individual has an equal chance to be selected into the sample

·        We must follow a scientific sampling process before we can generalize our results

§         Three common sampling methods:

-         Simple random sampling: Ex: drawing names out of a hat

·        This is the best method, but it is very difficult and almost impossible in ESL research

-         Stratified random sampling: proportions of the subgroups are determined, then participants are randomly selected from each stratum

·        Ex: If we arranged the class into groups such as white, Korean, Arab, etc., and then pulled 10 people from each group; the selection within each group must still be random

-         Cluster random sampling: the selection of groups rather than individuals as objects of study

·        Ex: Name each province in Thailand as a cluster, then take 10 people from each cluster (always get more participants than you need), this can cost a lot of money

-         Make sure you keep data separated into its group because reliability will be calculated for each group, not individually

·        Tell how many clusters or classes you have and calculate validity by groups

·        Very few people mention sampling methods here in the US
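The random sampling methods above can be sketched with Python's standard library; the population, strata, and sample sizes below are hypothetical:

```python
import random

rng = random.Random(0)  # fixed seed so the example is reproducible

# Simple random sampling: names out of a hat, equal chance for everyone.
population = [f"student_{i}" for i in range(100)]
simple_sample = rng.sample(population, 10)

# Stratified random sampling: fix the subgroups first, then sample
# randomly within each stratum (strata taken from the class example).
strata = {
    "white":  [f"w{i}" for i in range(30)],
    "Korean": [f"k{i}" for i in range(30)],
    "Arab":   [f"a{i}" for i in range(30)],
}
stratified = {name: rng.sample(group, 10) for name, group in strata.items()}
print(len(simple_sample), [len(s) for s in stratified.values()])
```

Cluster sampling would follow the same pattern, except that whole groups (e.g., provinces) are drawn first and then individuals are taken from each selected cluster.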

§         Nonrandom sampling

§         Systematic sampling is the choice of every nth individual (out of a list of 100 people, you take every 5th)

§         Convenience sampling is the selection of individuals who happen to be available for the study (I see you, help me!)

§         Purposeful sampling is when researchers knowingly select individuals based on their knowledge of the population (If I live in China, I will use Chinese students in my study because I know them and their culture)
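The systematic sampling rule above (every nth individual) is a one-line slice; a minimal sketch with a hypothetical list of 100 people:

```python
# Systematic sampling: out of a list of 100 people, take every 5th one.
people = [f"person_{i}" for i in range(1, 101)]
every_5th = people[4::5]  # the 5th, 10th, ..., 100th person
print(len(every_5th), every_5th[0], every_5th[-1])
```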



Reliability

§         Refers to consistency: a similar test score across two administrations of the same test

·        Scored on a scale of zero to one, 0.85 or higher is good

§         Rater reliability: scores by two or more raters are consistent

-         Interrater reliability measures whether two or more raters judge the same set of data in the same way; intrarater reliability measures whether one rater judges the same set of data in the same way at different times (Ex: When two people score papers, their results should agree at 0.85 or higher)
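One common way to quantify interrater reliability is a Pearson correlation between the two raters' scores, checked against the 0.85 rule of thumb from the notes. The scores below are hypothetical:

```python
# Hypothetical scores from two raters on the same five papers.
rater_a = [88, 75, 92, 60, 70]
rater_b = [85, 78, 90, 62, 73]

# Pearson correlation computed from first principles.
n = len(rater_a)
mean_a, mean_b = sum(rater_a) / n, sum(rater_b) / n
cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(rater_a, rater_b))
var_a = sum((a - mean_a) ** 2 for a in rater_a)
var_b = sum((b - mean_b) ** 2 for b in rater_b)
r = cov / (var_a * var_b) ** 0.5
print(round(r, 3), r >= 0.85)  # well above the 0.85 rule of thumb
```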


Instrument reliability

§         Test-retest reliability: the same test is given to the same group of participants at two times (should get similar results)

§         Internal consistency

-         Split-half procedure: odd-numbered items are correlated with even-numbered items of the same test; use the Spearman-Brown prophecy formula to calculate the full-test reliability

-         Kuder-Richardson 20 and 21 can be calculated by SPSS

-         Cronbach's alpha can be calculated by SPSS (the most widely used)
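The split-half procedure can be sketched end to end: correlate the two halves, then step the half-test correlation up with the Spearman-Brown prophecy formula, r_full = 2r / (1 + r). The half-test scores below are hypothetical:

```python
# Hypothetical half-test scores for five participants: totals on the
# odd-numbered items and on the even-numbered items of the same test.
odd_half = [12, 9, 15, 7, 11]
even_half = [11, 10, 14, 8, 12]

# Pearson correlation between the two halves.
n = len(odd_half)
mean_o, mean_e = sum(odd_half) / n, sum(even_half) / n
cov = sum((o - mean_o) * (e - mean_e) for o, e in zip(odd_half, even_half))
var_o = sum((o - mean_o) ** 2 for o in odd_half)
var_e = sum((e - mean_e) ** 2 for e in even_half)
r_half = cov / (var_o * var_e) ** 0.5

# Spearman-Brown prophecy formula: estimate full-test reliability.
r_full = 2 * r_half / (1 + r_half)
print(round(r_half, 3), round(r_full, 3))
```

Because the formula projects the reliability of a test twice as long, the full-test estimate is always at least as high as the half-test correlation.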


Journal Article Presentations

Quantitative Article by Hyunsim Kim:

Positive Feedback in Pairwork and its Association with ESL Course Level Promotion by D. Reigel

Purpose: To determine whether positive feedback rate is an effective predictor of course level promotion


Methods

- 2 ESL classrooms at Portland State University, students at basic to low-intermediate level

- Took note of positive feedback from teacher and other students

- Defined positive feedback as one of the following: praise, affirmation, laughter, or nonverbal cues


Findings

-         The main forms of positive feedback were affirmation and laughter

-         Found that student characteristics do not significantly affect promotion

-         Found that positive feedback DOES significantly affect level promotion

Conclusion/Classroom Implications

-         Positive feedback is related to level promotion

-         Teachers should increase positive responses and encourage interaction between students

Qualitative Article by Phillip Johnson

The Literacy and Development of Kindergarten English-Language Learners by Luisa Araujo

Purpose: Explored how a literature-based curriculum supported the literacy growth of ESL Kindergarteners


Methods

-         A year-long investigation at a K-8 school where Portuguese was the predominant language

-         Researchers were participant observers and never interfered with classroom instruction

-         Collected data and conducted interviews


Findings

-         Methods such as circle reading, phonics/handwriting, and journal writing were central to the Kindergarten literacy curriculum

-         Cannot judge students’ ability to read and write based on how well they speak

-         Circle reading: allowed them to use both languages, stimulated background knowledge, allowed children to see themselves as readers

-         Journal writing: the teacher did not model writing or check for accuracy, and allowed inventive spelling

-         Phonics/handwriting: allowed them to use both languages, wall charts, introduce beginning sounds in English, practice writing letters


Conclusion/Classroom Implications

-         A balanced literacy program supports ELLs

-         Limited English proficiency did not hinder them from learning literacy


Dustin Cornwell

EDCI 5544

Instructor:  Dr. Wei

February 8, 2017

Qualitative Research Article Analysis Report


1.  Han, Y., & Hyland, F. (2015). Exploring learner engagement with written corrective feedback in a Chinese tertiary EFL classroom. Journal of Second Language Writing, 30, 31-44.

 2.  The investigators are Dr. Fiona Hyland, Associate Professor of Education at the University of Hong Kong, and Ye Han, a PhD candidate at the same university.  Ye Han’s doctoral research “focuses on learner engagement with written corrective feedback in Chinese universities” (Han & Hyland, 2015, p. 44).   Dr. Hyland has researched the topic of feedback for more than 30 years and has published a book and many journal articles about the topic.  According to the University of Hong Kong website, her other research interests include second language writing and “autonomy and language learning and self-access resources” (HKU Scholars Hub, 2017).

 3.  This is a case study.  The authors closely examine and analyze the writing of a small number of participants to gather insights and identify possible further topics for research.

 4.  The purpose of the study is to “investigate the errors that individual learners made, the WCF (written corrective feedback) provided to these learners, and how they cognitively, behaviorally, and affectively engaged with that WCF…” (Han & Hyland, p. 33).  The purpose is both theoretical and practical, as the authors seek to “help teachers to enhance their WCF” (Han & Hyland, p. 31). There were five focusing questions:

            a. What linguistic errors do individuals in a Chinese tertiary-level EFL classroom make?

            b. What WCF is provided to address these errors?

            c. How do these learners cognitively process that WCF?

            d. In what ways do these learners use that WCF when revising their drafts?

            e. How do these learners affectively respond to that WCF? (Han & Hyland, p. 33).

 5.  The authors’ rationale for this investigation is that while previous research has provided information about aspects of engagement with WCF, there is not much research that attempts to combine the three aspects of engagement:  cognitive, behavioral, and affective (Han & Hyland, p. 32).  The authors believe it is important to take a broader view of these aspects rather than just focus on one aspect, as much of the previous research in this field has done.

 6.  The study takes place at an unnamed university in southeastern China.  The four students who participated in the research study were selected from a classroom of 25 EFL students in a level three (intermediate) EFL course.  The selected students were chosen because they were considered to be “average” students, typical of the class.  This evaluation was based on three data points:  grades close to the class average, in-class performance as assessed by one of the researchers, and comments from the EFL class instructor (Han & Hyland, p. 34).   Three of the four students were female, and one was male.  They were all first or second year undergraduate students; two majored in Chinese, one in public administration, and one in engineering (Han & Hyland, p. 34). 

 7.  The study took place over a period of 5 weeks.  Twenty hours of class sessions were observed and recorded throughout the 5-week period.  In the first week, the researchers interviewed the teacher and the students.  In week 2, they examined the first draft of the students’ 5-paragraph essays as well as the teacher’s WCF.  Then in the third week, two of the students voluntarily met with the teacher to talk about their writing, which the researchers recorded.  Also, all four students gave a verbal report about their thoughts about the WCF they had received.  In the fourth week, the students submitted a final draft of the essay.  In the fifth week, the researchers conducted a final interview with the teacher and the students (Han & Hyland, p. 34).  Questions included “Would you like your teacher to change the way he/she gave feedback on linguistic errors to you?” and “What did you do with the linguistic errors in your first draft?” (Han & Hyland, p. 42). 

 8.  Data were collected in person by the two researchers.  They observed and audiotaped 20 hours of classroom instruction.  The researchers took notes while observing and recording.  The teacher provided the researchers with the syllabus, lesson plans, and grading policies.  Also, the researchers interviewed students and asked for feedback in the students’ native language; these interviews were also recorded for transcription. 

 9.  The authors were strictly observers during the actual classes.  As the article noted, “No intervention was made in the research period” (Han & Hyland, p. 34).  However, the authors participated in the interviews conducted with the students at the beginning and the end of the 5-week study.  They also interviewed the teacher several times during the study.

 10.  The authors noted that there were two parts to the data analysis.  The first was the text analysis of the students’ drafts and the WCF they received, and the second part was a qualitative analysis of “interviews, verbal reports, teacher-student conferences, field notes, and class documents” (Han & Hyland, p. 34).  It was not indicated whether computer software was used.  It is unlikely that it was used for the text analysis since each text was read by multiple human coders, but it is possible that some type of spreadsheet or statistical software was used to calculate error rates.

            For the first part of the analysis, different categories of WCF were coded.  The four categories were Direct WCF (the teacher corrected the error), Indirect WCF (the teacher underlined or circled the error, but did not correct it), Indirect WCF with revision clues (the teacher used editing symbols to indicate, for example, where a word was missing), and Indirect WCF with clarification requests (the teacher underlined the error and put a question mark above it or wrote a question about the text for the student) (Han & Hyland, p. 43).  Then, the error rate per 100 words was calculated for each student, and the most frequent type of error was identified for each of the four students.

            In the second part of the analysis, the researchers read each participant’s data file multiple times and created a narrative of each student’s level of engagement with WCF (Han & Hyland, p. 35).  The researchers generated codes for certain types of WCF and placed these into related groups.  An extra coder (not one of the researchers) was used to determine the effectiveness of the coding categorization.  “The final inter-coder agreement rates for cognitive engagement, behavioral engagement, and affective engagement were 93.1%, 97.3%, and 87.5%, respectively” (Han & Hyland, p. 35). 
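Inter-coder agreement rates like those quoted above are, at their simplest, the share of coded segments on which the two coders assigned the same label. A minimal sketch with hypothetical codes (the category names are shorthand for the study's WCF types):

```python
# Hypothetical labels assigned by two coders to the same eight segments.
coder_1 = ["direct", "indirect", "indirect", "clues",
           "direct", "indirect", "clues", "direct"]
coder_2 = ["direct", "indirect", "clues", "clues",
           "direct", "indirect", "clues", "direct"]

# Percentage agreement: matching labels divided by total segments coded.
matches = sum(a == b for a, b in zip(coder_1, coder_2))
agreement = matches / len(coder_1)
print(f"{agreement:.1%}")
```

Published studies often supplement raw percentage agreement with a chance-corrected statistic such as Cohen's kappa, but the article reports percentages.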

 11.  One of the main findings was that word choice errors were the most common type of error on first drafts.  Additionally, the instructor was quite consistent in using Indirect WCF more than the other three types of WCF.  This type of feedback requires the most effort on the part of the student to correct because no clues are given as to how it should be corrected.    Then, in the results section, the authors wrote specific findings about each of the four participants.  One participant, Ying, was described as the most motivated, but she also experienced the most sadness or embarrassment about her errors because she felt so strongly about writing English correctly.  The researchers found that Ying was actively and successfully engaged with WCF (Han & Hyland, p. 36). 

            Another participant, Song, was described as overconfident and under-engaged.  She had much less interest in correcting her errors than Ying did, and she was much less successful at doing so since she didn’t deeply engage with the feedback and the resources she had available to help her (such as dictionaries).  As the class progressed, she became increasingly resistant to WCF because she was overwhelmed by the process (Han & Hyland, p. 38).

            Dai, like the other students, also felt negative emotions because of the numerous errors identified through WCF.  She often seemed confused by the WCF and wasn’t very successful at correcting her errors.  Her way of correcting errors was mainly to consult a dictionary, and the researchers remarked that while she successfully corrected some of the errors, she exhibited little cognitive understanding of the errors she made.  The authors felt she needed to more deeply engage with her WCF and use resources more effectively to understand the source of her errors.

            The final student, Lin, was not actively engaged in the WCF process, and he felt confused by the feedback.  He worked with a classmate not involved in the research to edit his writing, but this classmate was not able to help him learn the sources of his errors.  Lin did not seem particularly interested in correcting his errors since he had fewer errors than his classmates and felt that it wasn’t necessary to put too much effort forward to make the writing perfect (Han & Hyland, p. 40).

 12.  The research methods emphasize a very detailed, deep look at the data collected from each of the four case study participants.  Rather than simply counting the number of errors or the type of errors, the researchers attempt to learn more about the thought processes of each student and how they interpret their teacher’s WCF.  Not only do they interview the students at more than one point during the study, they also interview the teacher to understand her perceptions of the students and their language challenges.  The use of a third reader to calibrate the coding rubric, as noted above in question 10, demonstrates that the researchers want to ensure that their data is credible and accurate. 

13.  I was impressed by the depth of the data gathered about each student; in particular, the commentary about the students’ emotional states as they received the WCF.  This struck me as particularly powerful because students’ emotions can both help and hinder the learning process, and emotion is often overlooked in second language research, particularly in studies that are quantitatively focused.  The qualitative methods employed in this research enabled the researchers to delve in to small details about the learning process and identify key differences among the participants. 

14.  The main limitation of this study is that it is a very small sample of only four students and one teacher.  While I think it provides information that could help direct further research, it would be difficult to generalize these results.  It would be interesting to learn if ESL students living in an English-speaking country would react similarly or differently to WCF than the EFL students learning English in China did in this study.  Also, another limitation is that only Chinese students were used in the study.  Due to cultural differences, students in Latin America or Africa might react differently to WCF than the students in this small study did.

 15.  One of the things that I noticed immediately is that the teacher’s most common type of feedback was Indirect WCF.  This type of feedback does not provide any hints or context as to what the problem is in the sentence.  Because the students in this study were not advanced English learners, I wondered as I read through the results and discussion if the WCF might have been more effective if it had included revision clues or clarification requests.  At times, it might also have been appropriate to use Direct WCF in cases where the student clearly had great confusion about a word choice or sentence structure.  While more research is necessary, I would postulate that beginning learners need more Direct WCF and Indirect WCF with revision clues, while very advanced learners would probably do well with mainly Indirect WCF.  Intermediate learners, like those in this case study, probably should receive a balanced mix of these types of WCF.  Having additional help on some items would give students confidence in their ability to correct their errors, while at the same time they would be challenged by a few items that had only Indirect WCF.


Han, Y., & Hyland, F. (2015). Exploring learner engagement with written corrective feedback in a Chinese tertiary EFL classroom. Journal of Second Language Writing, 30, 31-44.

HKU Scholars Hub: Researcher Page for Hyland, F (n.d.). Retrieved January 27, 2017, from



Lauri Cheng

EDCI 5544

Michael Wei

Qualitative Research Article Report

8 February 2017

1.      Varghese, M., & Johnston, B. (2007). Evangelical Christians and English language    teaching. TESOL Quarterly, 41(1), 5-31. Retrieved from

2.      There are two investigators in this study. Both had previously done research on the moral dimensions of teaching and on language teacher identity. Neither of them is Christian, but both became aware of the large number of evangelical Christian teachers in the TESOL field and how little attention they had received in research. They wanted to understand the beliefs and motivations behind this group of teachers and also to do their part in making them more visible within the field. They acknowledged that their personal backgrounds might have influenced the results of their study, so they informed their participants of their positions on this topic. Both investigators are atheists, but "Manka was brought up as a Catholic, and Bill belonged for some time to the Church of England" (Varghese & Johnston, 2007, p. 13). Both had knowledge of nonevangelical forms of Christianity and of some other religions, but they describe having very little understanding of evangelical beliefs.

3.      This is a qualitative research study involving narratives. The study was conducted through in-depth interviews by the authors with students enrolled in ESL teaching preparation programs at Christian colleges. Interviews were the best way to understand how these teachers' religious beliefs related to their teaching, as the participants could state in their own words what kind of personal beliefs they had and not be represented or spoken for by someone who did not share those beliefs.

4.      The purpose of this study was to understand the relationship between ESL teachers and evangelical Christian beliefs. Many of these teachers are motivated by their religious beliefs and mission work to pursue ESL teaching. These beliefs are so intertwined with their teaching yet much of the TESOL field does not reveal this aspect of these teachers. The study is mainly just to create more dialogue on this topic since there seems to be a lack of it. It does also seem to be relatively personal since the two investigators acknowledge several times that they both do not have much knowledge of this issue because of their personal backgrounds.

5.      The theoretical framework is based on the idea that mission work and English language teaching (ELT) have historically been linked. Unfortunately, this kind of language conversion during mission work was a means of colonizing or subordinating indigenous people. Now that religious groups are no longer affiliated with the state in their mission work, there is a perceived separation between ELT and religious beliefs, but that separation is not complete. With the number of active Christians in the United States and of English speakers worldwide growing every day, the link between missionary work and ELT remains inextricable. There is still much debate about how to separate professionalism in teaching from personal religious beliefs and the attempt to convert nonbelievers, and about whether these issues even should be separated. However, the authors tried to make this study as objective as possible, stating, “Given the newness of this research topic, and our own unfamiliarity with it, we hesitated to adopt a strong a priori conceptual framework, preferring rather to focus on an initial information-gathering approach followed by tentative moves toward theorization” (Varghese & Johnston, 2007, pp. 13-14).

6.      The participants in this study were 10 undergraduate students at two Christian colleges, West University and Southern College. West University is located in the Pacific Northwest and Southern College in the South; the investigators wanted to make sure they included two separate regions. All of the participants were in their early twenties and studying in undergraduate programs preparing English language teachers. The investigators chose young participants very new to their ELT careers because they were interested in “trying to capture the processes by which perspectives on religion and teaching were being formed, rather than focusing on older and more experienced teachers whose ideas may be more fixed” (Varghese & Johnston, 2007, p. 12). All of the participants identified as evangelical Christians. About half planned to teach outside the United States, and the others intended to work within it. The interviews were conducted at the schools where the participants were studying.

7.      The investigators first decided on the two universities they would use to find their participants and conduct the interviews. They recruited through flyers, class visits, and personal contacts with faculty at the two schools. The interviews lasted 40-90 minutes per participant. They were then transcribed for analysis and returned to the participants for corrections or comments. After all of the information was collected, the researchers organized it into distinct sections. The first section presented vignettes of three participants to emphasize the individuality of each participant, so that readers would not be quick to generalize these teachers into a stereotype of evangelical Christians. The second section examined themes that emerged across the interviews. The third section identified the “significant moral dilemma that arises out of the encounter between English language teaching and evangelical Christianity” (Varghese & Johnston, 2007, p. 14).

8.      It was not mentioned exactly how the interviews were recorded. The questions the researchers asked during the interviews were included in the Appendix of the research study, but it is not clear whether the interviews were audio- or video-recorded or whether the answers were written down. The article does mention that the interviews were transcribed and sent back to the participants, so I would assume they were recorded.

9.      This was not really a field study. I would say that the authors were certainly not participants, as they were not interested in joining an evangelical church or partaking in mission work or rituals specific to the group they were studying. For this reason, I would say they were observers, trying to be as objective as possible and not influence their participants’ responses to the interview questions.

10.  The interview data was analyzed in two ways. First, a “rudimentary content analysis revealed important aspects of the ways in which the teachers’ religious beliefs related to their teaching” (Varghese & Johnston, 2007, p. 14). Then, the researchers used a Bakhtinian approach to analyze the discourse of the interviews and the devices the teachers drew on in formulating their beliefs. The Bakhtinian approach was not described in detail, but Bakhtin is referenced occasionally throughout the explanations and analysis of the participants’ responses.

11.  The results found that although there were many different ideologies across the participants, who all identified as evangelical Christians, there were still some recurring themes. One characteristic of being an evangelical Christian is activism. This can include spreading the gospel and teaching people about Jesus, though often in covert ways. Many of the participants agreed that they should not be overtly trying to bring nonbelievers to God but should lead by example and “plant seeds” to encourage people to find their own path to this belief. Because this religion affects so much of each participant’s life, it naturally affected his or her career choices. Many of the participants saw teaching ESL as a form of activism and an opportunity to reach more people who might not know about their religion. The participants also described a sense of marginalization from the dominant culture in the United States because of their religion. The researchers thought this sense might have made these future teachers sympathize more with minority groups learning ESL, such as immigrants or refugees. While all the teachers agreed that teaching was part of their evangelical duty of serving and being active in their community, they also all agreed that a sense of professionalism must be maintained. The researchers provided many accounts of participants stating that ESL teachers should enter the profession with the intention of teaching ESL and nothing else. They all acknowledged the opportunity the teaching platform could give them, but they also understood that it would be deceitful to say they were teaching English when their actual agenda was to try to convert their students.

12.  Interviews seemed to be the best research method because of the very personal subject this study touches on. Religion is extremely subjective, depending on the person being observed or studied and the person conducting the study. Interviews provide the exact articulations of the people who identify with the religion being studied, while the commentary from the authors provides an outside perspective on this group. Since the purpose of the study was mostly to collect information from a very present group in the TESOL field that has long been underrepresented, this kind of qualitative study achieved that goal well with the research methods used.

13.   I think it’s interesting to see the motivations of future ESL teachers. Most of the aspiring teachers I know are drawn to the field for similar reasons. Many people, including myself, really enjoy people of different cultures and with different ideas than our own. This kind of intercultural communication has been a consistent feature throughout my life in maintaining friendships and making meaningful connections with others. It seems that most of the participants in this study had a similar explanation. However, one of the participants was described as coming from a wealthy family and growing up in a basically all-white neighborhood. She kept saying how fascinated she was by other cultures and other people. The authors pointed out the patronizing undertone this kind of statement could have and how the position ESL teachers hold over their students could be very political and dangerous. This made me more aware of how I describe and consider my attitude toward others. I certainly do not want to seem patronizing when expressing my admiration for or curiosity about another culture. I also do not want to use ESL as a tool to convert immigrants to a dominant majority culture within the United States. This was not described as a goal by the participants, but some of their discussions and responses could definitely be interpreted that way.

14.  There are definitely not enough participants in this study for a thorough analysis of these issues. It would also be interesting to see how the responses might change as these students graduate from their programs and gain more experience actually teaching in the field. The study was also very broadly focused. The information was all interesting and helpful in giving a general sense of how some evangelical Christians view their beliefs as intertwined with teaching. However, many connections were made that could have been examined more closely. Because this topic was relatively new to the field, perhaps in the future researchers can create a more specific framework for conducting studies.

15.  The idea of professionalism within the TESOL field really stood out to me. Obviously, every teacher will have his or her own set of beliefs and values that will be part of their everyday life, demeanor, teaching style, social interactions, and so on. Teachers are moral leaders for their students whether they intend to be or not. We must be very careful not to impress our own beliefs onto a student in the process of teaching. All teachers should share the common goal of creating a safe and welcoming environment in their classrooms. Students should never feel pressured into adopting a set of beliefs, and the focus should always remain on teaching the content area; hidden agendas should not exist.




Lauri Cheng

EDCI 5544

Michael Wei

Quantitative Research Article Report

1 March 2017

1.      Crowther, D., Trofimovich, P., Saito, K., & Isaacs, T. (2015). Second language comprehensibility revisited: Investigating the effects of learner background. TESOL Quarterly, 49(4), 814-837.

2.      This is a quantitative study using correlational research. The authors used this study to determine which linguistic variables contributed to the accent and comprehensibility of an ELL’s speech. They also wanted to compare different L1s to see if the linguistic variables were similar across a variety of L1s or if there were distinct differences.

3.      The research was mostly based on the raters’ analysis of speech. The recordings the raters reviewed came from an unpublished corpus. The speakers were all international students enrolled in undergraduate and graduate programs at an English-medium university in Montreal, Canada. The original corpus had these speakers perform five speaking tasks, but only the picture narrative task was used for the study, as previous research had been done with this same task. The speakers had thirty seconds to describe what was happening in a picture slide show. The speech was recorded directly onto a computer and saved as an audio file. The samples were also transcribed by a trained research assistant. The audio files and transcripts were used by the raters for this research study. The raters were all native English speakers. Each rater attended four individual 2-hour sessions, all occurring within 3 weeks of each other. A laptop was placed in front of each rater, and instructions were given on how to use the program to evaluate speech. Raters also got the opportunity to practice to ensure understanding before hearing the real data. Each rater evaluated the recordings or transcripts using a computer program with a sliding scale from 0 to 1000.

4.      The purpose of this study was to investigate how comprehensibility and accent are affected by an ELL’s L1. “One key component of speaking ability is pronunciation, which has typically been discussed with reference to two broad constructs, namely, understanding and native likeness” (Crowther, Trofimovich, Saito, & Isaacs, 2015, p. 815). The authors noted that previous research had only examined a single L1 group’s influence on L2 speaking, or simply how the L1 influenced overall success in the L2. Many factors determine how well a native speaker can understand a non-native speaker. The research aimed to pinpoint exactly which factors cause problems in comprehension and which factors influence pronunciation. The questions raised by the researchers include “which linguistic variables in L2 speech contribute to listener perception of comprehensibility and accentedness” (Crowther et al., 2015, p. 819) and “whether and to what degree the relative contributions of these linguistic variables remain generally problematic across a range of speakers or differ as a function of their L1 background” (Crowther et al., 2015, p. 819).

5.      The researchers mention that previous studies have identified linguistic influences on comprehensibility and accent by comparing an L1 with an L2. It is widely agreed that a speaker’s L1 will influence his or her L2. However, much of the research has only identified very specific variables rather than comparing variables in terms of how they affect comprehension. There has been vast research on L1 influence on the phonological aspects of L2 production. The question that remained unresolved was which linguistic factors relating to comprehensibility and accentedness are specific to a speaker’s L1. To investigate this question, the study involved participants from a variety of L1s with the same L2, along with a variety of linguistic variables that could contribute to accent and comprehension.

6.      Recorded speech and transcripts of these samples were evaluated by native English speakers. The recordings were of 45 speakers chosen from 143 speakers, all performing the same speaking task. The groups were chosen based on the most widely spoken languages at the university these speakers attended. The groups consisted of speakers of Chinese (Mandarin), Hindi-Urdu, and Farsi as their L1 backgrounds, with 15 speakers in each group. Hindi and Urdu were grouped together because “the principal difference between these languages is script-based” (King, 1994). The speakers were also chosen based on their TOEFL and IELTS scores, which were all similar to one another, and their speaking scores were sufficient for them to pursue academic degrees. The differences among the L1 languages were quite pronounced, also making the speakers appropriate for this type of study. The authors mention that in terms of rhythm, Farsi is stress-timed, Hindi is non-Romance syllable-timed, and Chinese is tonal. The audio recordings as well as their transcriptions were available for the raters to evaluate. The raters were not the ones being studied, but they were crucial to this investigation. They consisted of 10 native English speakers born and raised in English-speaking homes. The raters also resided in Montreal, where the international students were enrolled in university. Montreal is a bilingual French-English city, so the raters reported speaking English on average 89% of the time. This is important to note, as the raters were regularly exposed to languages other than English. They were all either enrolled in or had recently completed graduate studies in applied linguistics, and they had, on average, 6.6 years of L2 teaching experience. These raters were selected for their experience in linguistics in order to achieve more consistent evaluations.

7.      First, the recordings were made by the speakers and then transcribed. The raters then completed three tasks, in a different order for each rater to guard against ordering effects. One task was to evaluate each audio recording for accent and comprehensibility. The raters could listen to the recording only once and used a sliding scale from 0 to 1000 on a computer program. For accent, the low end of the scale was described as “heavily accented” and the high end as “no accent at all”; for comprehensibility, the low end was “hard to understand” and the high end “easy to understand.” Another task used the sliding scale to evaluate phonology and fluency. This time, the raters could listen to the recordings as many times as they felt necessary for an accurate evaluation, rating segmental errors, word stress errors, intonation, rhythm, and speech rate. In the third task, the raters analyzed lexis, grammar, and discourse structure by reviewing the written transcripts of the audio files, again using the scale to rate lexical appropriateness, lexical richness, grammatical accuracy, grammatical complexity, and discourse richness. After all three tasks were completed, the raters used 9-point scales to report the extent to which they understood each category (1 being “I did not understand at all” and 9 being “I understand this concept well”) and could comfortably and easily apply it. The raters’ scores were then averaged for each category for each speaker and compared.

8.      The picture narrative task used to create the recordings involved an eight-frame colored picture story featuring two strangers bumping into each other while rounding a corner, accidentally exchanging their identical suitcases, and finally realizing their mistake upon returning home. The narratives were recorded and then edited down to the initial 30 seconds, and transcripts were made of each recording for the raters as well. The raters used the same sliding scale to rate various aspects of the narratives; each rater’s score was then averaged with the other raters’ scores and compared across the speakers of varying L1s.

9.      The sliding scale anchors were as follows: for accent, “heavily accented” to “no accent at all”; for comprehensibility, “hard to understand” to “easy to understand”; for segmental errors and word stress errors, “frequent” to “infrequent or absent”; for intonation and rhythm, “unnatural” to “natural”; for speech rate, “too slow or too fast” to “optimal”; for lexical appropriateness, “too many inappropriate words used” to “consistently uses appropriate vocabulary”; for lexical richness, “few, simple words used” to “varied vocabulary”; for grammatical accuracy, “poor grammar accuracy” to “excellent grammar accuracy”; for grammatical complexity, “simple grammar” to “elaborate grammar”; and for discourse richness, “simple structure, few details” to “detailed and sophisticated.” After the raters completed each evaluation, their scores were compared and found to be very consistent. Because of this consistency, the raters’ average score was computed for each speaker and each category. These averages were then compared across the three L1 groups: Chinese, Hindi-Urdu, and Farsi. This kind of data analysis would be categorized as statistical.
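The summary statistic the report describes—averaging each speaker's slider ratings across raters before comparing groups—could be sketched as follows. The rater names, speaker labels, and scores here are invented for illustration; they are not the study's data.

```python
import statistics

# Hypothetical slider ratings (0-1000 scale): rater -> speaker -> category.
ratings = {
    "rater_1": {"speaker_A": {"comprehensibility": 720, "accentedness": 610}},
    "rater_2": {"speaker_A": {"comprehensibility": 680, "accentedness": 590}},
}

def average_score(ratings, speaker, category):
    """Average one speaker's rating in one category across all raters."""
    return statistics.mean(
        per_speaker[speaker][category] for per_speaker in ratings.values()
    )

def group_mean(speaker_averages):
    """Mean of the per-speaker averages within one L1 group."""
    return statistics.mean(speaker_averages)

avg_a = average_score(ratings, "speaker_A", "comprehensibility")  # (720 + 680) / 2 = 700
```

These per-speaker averages are what would then be compared across the Chinese, Hindi-Urdu, and Farsi groups.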

10.  According to the raters’ results for the accent and comprehensibility review, the Chinese group was rated as less comprehensible and more accented than the other two groups, which did not differ much from each other. For the linguistic categories, the raters’ evaluations were entered into a computer program that looked for underlying patterns in how the variables clustered. Two factors emerged as determining overall comprehensibility: pronunciation and lexicogrammar. Pronunciation involved word stress errors, intonation, rhythm, segmental errors, and speech rate. Lexicogrammar included discourse richness, grammatical complexity, lexical richness, grammatical accuracy, lexical appropriateness, and speech rate, although speech rate contributed the least to comprehensibility. Comprehensibility was associated with pronunciation for the Chinese group, with lexicogrammar for the Hindi-Urdu group, and with neither factor for the Farsi group.
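The report does not name the program that found the underlying patterns, but principal component analysis (PCA) is one standard way to extract such factors from a speaker-by-variable rating matrix. The sketch below runs PCA on invented data built from two hypothetical latent factors, loosely mimicking "pronunciation" and "lexicogrammar"; it is an illustration of the technique, not the study's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_speakers = 45

# Two invented latent factors; five rating variables load on each.
pron = rng.normal(500, 100, size=(n_speakers, 1))
lexgram = rng.normal(500, 100, size=(n_speakers, 1))
noise = rng.normal(0, 20, size=(n_speakers, 10))
X = np.hstack([np.tile(pron, 5), np.tile(lexgram, 5)]) + noise

# PCA: center the data, eigendecompose the covariance matrix, and sort
# components by the share of variance they explain.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
explained = eigvals[::-1] / eigvals.sum()  # eigh returns ascending order

# With two latent factors, the first two components dominate.
top_two_share = explained[:2].sum()
```

In data like this, the first two components account for nearly all the variance, mirroring the two-factor (pronunciation and lexicogrammar) structure the authors report.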

11.  For the Hindi-Urdu group, lexicogrammar was the biggest factor in comprehensibility, leading the authors to conclude that listeners’ understanding of this group was mainly based on the speakers’ lexical, grammatical, and discourse-based choices, not the quality of pronunciation, fluency, or prosody. For the Chinese group, comprehensibility was mainly associated with pronunciation, particularly segmental accuracy. The researchers believe this is due to the “substantial crosslinguistic distance between Chinese and English” (Crowther et al., 2015, p. 831). For the Farsi group, none of the linguistic variables had a strong relationship with comprehensibility. The researchers concluded that while this could be attributed to Farsi speakers forming the largest L1 group at the university, giving raters more familiarity with them, it could also be that there simply is no single factor with a strong relationship to comprehensibility for this group. Although the Farsi speakers scored similarly to the others on their TOEFL and IELTS tests, those tests measure overall speaking proficiency, and the Farsi speakers could have been more proficient in just those aspects of L2 speech that affect comprehensibility.

12.  The researchers mention, especially with regard to the Farsi group, that additional research should be done to isolate the linguistic variables that affect comprehensibility, as this study did not find much differentiation for that group. This research could also be an excellent starting point for future work analyzing linguistic features beyond phonology in studying relationships between L1 and L2 production. A thorough understanding of linguistic influences on comprehensibility is needed so that, when encountering results like the Farsi group’s, researchers can use a more nuanced approach. The speakers’ level of L2 proficiency is also a big factor that could alter the results; a comparison of beginner, intermediate, and advanced learners could help determine which linguistic factors affect comprehensibility.

13.  I’ve only heard very general theories about how an L1 can affect the learning of an L2. I know that there has been some debate and some changes in theory on whether or not an L1 can harm or help an L2. This research was interesting to me because it went into more specifics of how exactly an L1 can affect an L2 and in what aspect of language. It seems that there could even be more research done on other aspects of language besides comprehensibility. However, I believe the goal of learning a language is to be able to communicate effectively and comprehensibility essentially covers that idea. I also appreciate that such diverse languages were used to compare for this study.

14.  The sample size was very small: 45 speakers, with only 15 in each group, may not yield results as reliable as a larger sample would. Also, the results of the analysis depended heavily on how knowledgeable the raters were in each category they were rating. Finally, as previously mentioned, the speakers’ proficiency may have a major impact on the results for these linguistic variables, which should be taken into consideration in future research.

15. According to this study, there is no single linguistic variable universally predictive of comprehensibility for speakers from different L1 backgrounds. Teachers should pay attention to all linguistic variables relevant to speaking proficiency, including lexicogrammatical aspects, not just pronunciation. Also, for instructors teaching homogeneous groups of L2 learners, this kind of research may help them determine which linguistic variables will predict comprehensibility issues. “In essence, targeting L2 comprehensibility as a learning goal requires an eclectic, comprehensive approach sensitive to the variety of L1s in a language classroom” (Crowther et al., 2015, p. 833).


Quantitative Research Article Analysis Report

Aroob Alwadie

EDCI 5544

University of Missouri-Kansas City


Quantitative Research Article Analysis Report

1. Short, D. J., Fidelman, C. G., & Louguit, M. (2012). Developing academic language in English language learners through sheltered instruction. TESOL Quarterly, 46(2), 334-361.

2. The study was a quasi-experimental comparison study that examined the effect of Sheltered Instruction Observation Protocol (SIOP) model instruction on the performance of middle and high school English language learners. The SIOP model was used to teach the content curriculum to new language learners: when students are learning a new language, teachers try to make content concepts accessible while developing the students’ skills. The study was carried out in English as a second language programs in New Jersey, comparing two districts: one that applied the SIOP model to improve the performance of English language learners, and the other serving as a comparison district.

3. Teaching methods in different parts of the world are geared toward deepening and broadening a learner’s ability to use a new language. According to Woodrow (2006), in the United States the educational reform movement affects English language learners because states implement standards-based testing irrespective of a student’s English proficiency. English language learners show poor performance in elementary and secondary schools; the reason behind this underachievement is that the students take subject tests in English before they are proficient in it. The suggested solution is to integrate language development with techniques that enable English language learners to comprehend curriculum topics more quickly. According to Murphy and Stoller (2007), this technique is known as sheltered instruction, and it focuses on a particular subject curriculum taught by a content specialist instructor. The study presented an analysis of how the SIOP model offers a platform for lesson planning and for incorporating practices that improve student achievement.

4. The primary aim of the study was to examine whether English language learners in a district whose teachers received professional development in the SIOP model would show higher achievement in oral, writing, and reading proficiency in English. The results would be compared to those of the control group, a district in which the teachers did not receive SIOP model professional development. Additionally, the study sought to investigate whether teachers achieve a high level of implementation of the SIOP model during a program of sustained professional development lasting one or two years.

5. Previous studies on the SIOP model focused on how it can be successfully implemented. Those studies identified the most appropriate areas for applying the technique and the success factors associated with it. In this article, the research was moved to the next step: investigating the effects of implementing the SIOP model on the academic literacy performance of English language learners. The study extended the research into new districts by comparing one with the SIOP model and one without. Moreover, the paper broadened the scope of past SIOP studies by including high schools.

6. The study involved two school districts in New Jersey, which represented the growth in urban learning institutions. Each district had a high school and two middle schools. The treatment district served about 10,000 students, whereas the comparison district served approximately 6,000. The districts had multilingual populations, with about 15-18 native languages. Most of the teachers involved in the study had volunteered, but some newly hired teachers in the treatment district were assigned to it. 35 teachers participated in Cohort 1, while 23 participated in Cohort 2. The teachers taught science, mathematics, social studies, special education, ESL, language arts, and technology. The comparison district did not have cohort groups because SIOP professional development was not used there; 22 and 19 of its teachers participated in the first and second years of the study, respectively. The students who took part were in grades 6-12 in both districts. 278 English language learners from the treatment district participated in the study and 169 from the comparison district; these numbers were reduced from 386 and 176, respectively, after the IPT test scores were taken into account.

7. The study was conducted over a two-year period beginning in 2004. The researchers began by identifying and selecting the most suitable participants: the school districts, teachers, and students. IDEA Language Proficiency Tests (IPT) were then administered to the students in the baseline year. Data was collected from the teachers and students and then tabulated. Further analysis was done, and comparisons were made between the treatment and comparison districts. Finally, the results were discussed and conclusions drawn.

8. Classroom data was collected through observation and field notes. The researchers measured the extent to which the SIOP model had been implemented in the treatment district. In the comparison district, the features of the SIOP model present in English language instruction were observed and recorded in field notes. The IPT score data was then used to determine the impact of the SIOP model on English language development.

9. The data analysis began by calculating the mean IPT scores for each year and comparing them using ANOVA. The mean IPT proficiency level scores in the treatment and comparison districts were presented in graphical and tabular form. Line graphs and tables enabled the researchers to draw comparisons between the two districts and allowed for easy tracking of changes across the year. The second-year data was analyzed using analysis of variance to evaluate whether the SIOP training the teachers had received had any impact on the students’ English language achievement. Cohen’s d effect sizes were also calculated, but a longitudinal analysis was not undertaken because few students participated in the IPT testing at all three test administrations. Furthermore, an intraclass correlation was calculated to determine whether school effects should be accounted for in the analysis of the achievement scores.
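Cohen's d, mentioned above, is the standardized difference between two group means: the mean difference divided by the pooled standard deviation. A minimal sketch, using invented IPT-style scores rather than the study's data:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented proficiency-level scores for illustration only.
treatment = [3.1, 3.4, 3.0, 3.6, 3.3]
comparison = [2.8, 3.0, 2.7, 3.1, 2.9]
effect = cohens_d(treatment, comparison)
```

By convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 large, which is how effect sizes like the ones reported here are usually interpreted.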

10. Analysis of the classroom observations and field notes revealed that teachers with SIOP model professional development incorporated more features of sheltered instruction than those without it. The two districts had different methodologies, and hence there may be a relationship between teacher training and SIOP model implementation. Overall, students from the treatment district performed better on the IPT tests than their counterparts from the comparison district.

11. The authors concluded that SIOP is a promising model that can be utilized for professional development and for improving the instruction provided to English language learners. Besides improving the quality of instruction, the SIOP model can improve academic language achievement in small urban school districts. The findings support the idea that English as a second language instructors should undergo a training program that enables them to grasp the fundamentals of the SIOP model. The sheltered instruction concept applies to contexts where second language learners study content through a new language.

            12. The findings and conclusions of this study may apply to other parts of the United States and even to the world as a whole. However, in interpreting the outcome, it is important to note that the teachers involved in the study were already working within an existing culture of SIOP model use. In normal circumstances, it takes time and resources to train teachers to become better implementers of the technique. Furthermore, changing teacher instructional practice requires workshops, coaching, and technical assistance, and is a long-term endeavor. Cooperative practice, sustained engagement, and thoughtful coordination are necessary for efficient implementation of the SIOP model.

            13. The article fascinates me because of the overall design and procedural nature of the study. The fact that the study was conducted over a lengthy period adds to its reliability. The different comparisons made in the article between the two districts add to the strength and validity of the results. I found the article particularly informative because it provides valuable information on the efficient and effective implementation of the SIOP model. The suggestions for future research provided in the article point toward progress in students’ learning of English as a second language.

            14. The study had two major limitations: the voluntary participation of teachers and its quasi-experimental design. It is possible that the instructors who volunteered to take part in the study differed from those who opted out, and the differences may have affected the outcome. Although the quasi-experimental design was an improvement on the design used at the Center for Research on Education, Diversity, and Excellence, it was difficult for the researchers to include large pools of teachers and students, and, therefore, the outcome measures were constrained.

            15. The article is informative and instructive, especially for ESL teachers. It shows the importance of systematic literacy initiatives, and it emphasizes the need for sheltered instruction. The use of several ESL-related references makes the article credible and worth recommending. Additionally, it shows the relationship between effective ESL strategies and students’ performance in the subject. Teachers, policy-makers, and other stakeholders in the educational sector can find the article useful in making decisions concerning ESL intervention models, curricula, and content.






EDCI 5544

Joe Herdler


Class wrap-up for Feb. 20, 2008

Chapter 6, Qualitative research


Class Lecture


Defining qualitative research








Types of Qualitative studies


  1. Ethnography








·             Ethnographic research questions can be fluid and dynamic.


·             The research questions may be revised and reformulated based on new

                        data discovered as the research is carried out.





·          The projects can be very long term, and the data collection can take many

                        years to complete.


·          Highly detailed and accurate records must be kept.


·          Repeated and careful analysis of the data is required.


·          Multiple sources of data should be used.


·          When submitting an IRB application for an ethnographic project, the protocol

                        should not be made too specific.






  2. Case Study







·        A case study allows the researcher to study one individual in depth.


·        The project can study one individual class or school.




·        The results should be carefully generalized when it comes to other individuals or when applying the findings to groups of individuals.

  3. The Interview


·        The interview method has three basic formats: structured, semi-structured, and unstructured.


·        In the structured format, the researcher asks the same questions of every participant in the study. There is no deviation from this standard set of questions.


·        In the semi-structured format, the researcher uses a list of questions as a guide but is not limited to only those specific questions.


·        In the unstructured format, the interview is styled more like a normal conversation. The interviewer can ask anything they want.


·        In the interview methodology, the interviewer/researcher might have already observed the participants before the interview.


·        Again, there are advantages and disadvantages to the interview method.




·        An interview can allow the researcher to investigate things that are not immediately observable.


·        The interviewer can elicit additional information from the participant if the participant’s answers are vague.


·        The interview can obtain data from participants who are uncomfortable using other data collection methods.


·        The interview can also be held in the L1 of the participant.








·        Whatever data the researcher needs should be obtained during the interview. It is very difficult to arrange a second interview, especially with a famous person.


·        If a participant is being “cagey” and refuses to provide the desired data, the researcher must note this and proceed with the interview.


·        An interview can elicit vague and misleading data from the participant. This is especially true with politicians.


·        The subject may give answers to the interviewer’s questions that are self-delusional.


·        “Selective recall” and distortions of perception might also produce misleading or erroneous data.


·        Again, in order to reduce personal bias and prejudice, the researcher’s/interviewer’s background should be revealed in the methods section of the publication.


During the interview process, the researcher/interviewer should:











4. Observation


·        Using observation, data can be collected through the use of field notes and audio/video (AV) recordings.


·        For highly structured observational studies, the researcher often will use a detailed checklist and/or a rating scale. This should be constructed before the observation.


·        For less structured observations, the researcher will most likely use field notes or transcripts of tape recordings of the observed activities.


·        Permission for the observation should be obtained from the participants or their legal guardian.


·        The observer should keep a very low profile while observing.



·        There are advantages and cautions of the observation method:




·        A good observational researcher can gather a lot of data from the participant’s behavior in a reasonable amount of time.




·        Observation does not allow researchers to access the participant’s motivation for the observed behaviors. A survey or interview must be used to determine the motivations of the participants.


·        The researcher must be aware of the ‘observer’s paradox’: the presence of the observer can influence the behavior of the observed. One solution is for the observer to become a participant observer in the group. This minimizes the observer’s paradox and is also a good way to gain the insider’s viewpoint.


·        There is the possibility of the Hawthorne effect.


·        The ethical issues that are related to participant observation should be considered.


  5. Diary/Journal


In qualitative second language research, diaries, often referred to as L2 journals or learner autobiographies, can be used in research projects. A journal lets the learner write about his or her L2 learning experiences.


The L2 journal is very important for recording second language learning experiences.


Again, there are research advantages and cautions to using the Diary/Journal methods.




·        This method can yield great insight into the L2 acquisition process.


·        A journal/diary can be completed at any time that is convenient to the learner.




·        It takes time to properly keep a regular journal.


·        It could become a burden to the learner.


·        Analyzing the data can be difficult. A long journal can be extremely difficult to analyze properly.



Analyzing qualitative data






Note: The rest of the notes presented in this wrap up are mostly from the slides. The lecture went very fast from here to the break because of a lack of time.






·        Transferability: findings may be transferable to other populations in similar contexts.



·        Confirmability: Researchers are required to make available full details of the data

      on which they are basing their claims and interpretations.


·        Dependability: The researcher must make sure that dependable inferences

      have been generated from the data.





·        Triangulation is when different methods are used to obtain the same data, such as

      field notes, surveys, interviews, observations, etc.


·        Theoretical triangulation is when multiple perspectives are used to analyze the

            same set of data.


·        Investigator triangulation is when multiple observers or interviewers are used to

            obtain the same data.


·        Methodological triangulation is when different measures or methods are used to

            investigate a particular phenomenon.


·        Triangulation entails the use of multiple, independent methods of obtaining data

       in a single investigation in order to arrive at the same research findings. This will

       increase the credibility and reliability of the data.


·        Triangulation also reduces the observer or interviewer bias and enhances the

      validity and reliability of the data.



The role of quantification in qualitative research


·        Quantification plays a role in both the generation of the hypothesis and the

      verification of patterns of data reporting.


·        It can also be used for the purpose of data reporting at a later time.


·        It is a valuable tool in that the numerical descriptions can make readily

            apparent both why the researchers have drawn particular inferences and

            how well their theories reflect the data.


·        It can be useful in determining whether the findings are relevant to other contexts.




Final notes on lecture:


·        When doing long term research, it is important to keep a research journal.


·        When analyzing qualitative data, the researcher should not have any preconceived

            hypotheses or notions of what the outcome of the data analysis should be.




Class Presentations


Quantitative Article Presentation

Student name here


Matsumura, S., & Hann, G. (2004). Computer anxiety and students’ preferred feedback methods in EFL writing. The Modern Language Journal, 88(3), 403-415.




To examine the effects of computer anxiety on students’ choice of feedback methods and academic performance in English as a foreign language (EFL) writing.




The incorporation of computer technology into the classroom has been accompanied by an increasing number of students experiencing anxiety when interacting with the computer.


Research Design


·              4 beginning level and 4 intermediate level EFL classrooms (TOEFL 400-500)

·              2 private universities in W. Japan

·              218 students

·              18 to 21 years old

·              Varied in computer experience




·              Questionnaire on computer anxiety

      -CARS (7-item form) by Miller and Rainer (1995)

·              Formal essay-writing task

       -“Should English education begin in Japanese elementary school?”

       -5-paragraph format

       -Two drafts: rough and final (3 weeks)

       -Evaluated for grammar, vocabulary, originality, consistency, and essay structure (5 pts each, 20 pts total)




·              Question 1

      -Students’ levels of computer anxiety were related to choice of feedback methods

·              Question 2

      -Degree of improvement in the students’ essays was related to their choice of feedback methods


Classroom Implications


·              Provide multiple options for feedback methods

·              Teacher needs to address student who avoid all type of feedback methods




Qualitative Article Presentation

Student name here


Schlebusch, G., & Thobedi, M. (2004). Outcomes-based education in the English second language classroom in South Africa. The Qualitative Report, 9(1), 35-48. Retrieved 01/17/08, from







EDCI 5544

Qualitative handout

Joe Herdler



Reeves, J. R. (2006). Secondary teacher attitudes toward including English-language learners in mainstream classrooms. The Journal of Educational Research, 99(3), 131-142.



The purpose of this study was to probe the attitudes of secondary content area teachers towards the inclusion of ESL learners into non-ESL classrooms, using qualitative methodology and a survey as the data gathering instrument.


Research Problems:

There were four basic areas of investigation in this study.


Research Design:

The research centered on an author-constructed survey instrument designed to indirectly probe the attitudes of the respondents while minimizing rhetorical or ideological responses and gaining insight into the perceptions and attitudes of the participating educators.




Classroom Implications:

Teachers in this study showed no bias against ELLs, but they did show a need for a better understanding of (and empathy with) ELLs and their special needs.







The Effect of Daily Phonemic Awareness Instruction on Kindergarten English Language Learners’

Reading Ability


 Research Proposal



 Kristin Beach



EDCI 5544

Dr. Wei

April 19, 2013



For the last several years, educators have been discussing the relationship between a child’s awareness of the sounds of language and his or her ability to read. Recent studies of reading acquisition have shown that the acquisition of phonemic awareness is highly predictive of success in learning to read and that phonemic awareness abilities learned in Kindergarten appear to be the best single predictor of successful reading acquisition. Some Kindergarten teachers overlook phonemic awareness instruction and confuse it with phonics instruction. By first grade, students who struggle with reading often struggle because of low phonemic awareness skills. The current study is designed to answer the following research question: How will daily phonemic awareness instruction affect the reading growth of Kindergarten English language learners? This study includes 20 Kindergarten ELLs from an urban school in a large city. Students will participate in daily whole group and small group phonemic awareness activities. Journeys, a research-based reading curriculum, will be used to provide daily instruction. Students’ phonemic awareness growth and development will be observed and assessed by the researcher. This study proposes that Kindergarten students who receive daily phonemic awareness instruction will be less likely to need intervention in first grade.


Research has helped educators to understand why the strong relationship between phonemic awareness and learning to read exists. One possible explanation is that phonemic awareness supports understanding of the alphabetic principle—an insight that is crucial in reading an alphabetic orthography. The logic of alphabetic print is apparent to learners if they know that speech is made up of a sequence of sounds (that is, if they are phonemically aware). In learning to read, they discover that it is those units of sound that are represented by the symbols on a page. Printed symbols may appear arbitrary to learners who lack phonemic awareness (Cunningham, Cunningham, Hoffman, & Yopp, 1998). Phonemic awareness, by definition, involves an understanding of the way that sounds function in words. Phonemes are the smallest units of sound in a language that distinguish meaning. The purpose of this study is to determine if daily phonemic awareness instruction will have a positive effect on Kindergarten ELLs’ reading levels by the end of Kindergarten. Currently, approximately one-third of first grade students at the school where the study takes place struggle with reading. This group of students receives weekly intervention from a certified teacher. The researcher believes that if these students had received daily phonemic awareness instruction in Kindergarten, they would be less likely to need reading intervention in first grade. This study will likely show that because students are receiving daily phonemic awareness instruction, they will be reading grade-level-appropriate texts by the end of the Kindergarten year. Because Kindergarten, first, and second grade are vital years for learning to read, strong phonemic awareness instruction in Kindergarten will provide the students with the foundation they need to be successful readers.
This study will focus only on Kindergarten students and will focus only on daily whole group and small group phonemic awareness activities and instruction.


This year-long case study will take place at an urban elementary school in a large city, where 70% of the students are English language learners. Twenty ELL students enrolled in Kindergarten for the 2013-2014 school year will participate in this study. The study will take place from the beginning of Kindergarten in August 2013 to the end of Kindergarten in May 2014. A reading curriculum by Houghton Mifflin Harcourt, called Journeys, will be used to teach whole group and small group phonemic awareness activities. Journeys is a comprehensive K–8 reading/language arts program. It features an explicit, systematic instructional design based on the research and best practices of its program authors, including Irene Fountas, Shane Templeton, Jack Pikulski, David Chard, J. David Cooper, and Sheila Valencia, and it uses a wide range of print and digital texts and activities to support and engage students at school and at home.

The researcher will be a participant observer for one hour and twenty minutes every day in the Kindergarten classroom where the study is taking place. The researcher will observe daily whole group phonemic awareness instruction and activities for 20 minutes each day, and will observe four fifteen-minute small reading groups where the teacher will also provide phonemic awareness instruction along with reading instruction.

In order to measure phonemic awareness growth over the course of this study, the researcher will use three forms of data collection: Fountas/Pinnell Benchmark Assessments (BAS), collected every six weeks; the STAR Literacy Assessment, collected four times during the Kindergarten year; and note taking, done on an ongoing basis.

            The Fountas & Pinnell Benchmark Assessment System is a formative reading assessment comprising high-quality original titles, or little books, divided evenly between fiction and nonfiction. The assessment measures phonemic awareness, decoding, fluency, vocabulary, and comprehension skills for students in Kindergarten through 8th grade (Pinnell & Fountas, 2007). To determine whether the BAS is a valid assessment of students’ reading levels, a formative evaluation was conducted with a broad spectrum of classroom readers in different regions across the United States. This formative evaluation generated ongoing and immediate feedback from field test examiners and readers that was used during the continued development of the program to ensure that it met standards of reliability and credibility. After two and a half years of editorial development, field-testing, and independent data analysis, the BAS texts were demonstrated to be both reliable and valid measures for assessing students’ reading levels (Pinnell & Fountas, 2007).

The Fountas/Pinnell Benchmark Assessments will require the researcher to observe the students closely while they are reading and writing. The researcher will get to know each student in a one-to-one setting. Having time with each student to gather data about their reading and writing about reading is valuable (Pinnell & Fountas, 2007). BAS data will be collected every six weeks. During a BAS, the researcher will listen to a student read, complete a running record (Appendix A), and then create a summary of the student’s reading. The summary includes accuracy, self-correction, fluency, and comprehension. The BAS provides the students with an independent reading level, and it provides the researcher or classroom teacher with an instructional reading level.

            STAR Early Literacy Assessments will also be used by the researcher to measure phonemic awareness growth. STAR was created by Renaissance Learning and is used for screening and progress monitoring. STAR assessments are reliable and valid computer-adaptive assessments of reading and comprehension for grades 1–12. STAR provides nationally norm-referenced reading scores and criterion-referenced scores (Renaissance Learning, 2009). STAR Early Literacy’s reliability was estimated using three different methods (split-half, generic, and test-retest) to determine the overall precision of its test scores. The analysis was based on test results from more than 9,000 students. The reliability estimates were very high, comparing favorably with reliability estimates typical of other published early literacy tests (Renaissance Learning, 2009). The researcher will collect data from the STAR assessment four times during the year. The STAR assessment is broken into 10 reading categories, including alphabetic principle, concept of word, phonics, visual discrimination, structural analysis, vocabulary, and phonemic awareness (Appendix B), which is the focus of this research. Overall, the test is very user-friendly, and classroom teachers use it to individualize instruction, guide whole class instruction, and identify the specific areas where students need improvement and where they have grown.
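The split-half method named above can be sketched in a few lines. This is a generic illustration of the technique, not Renaissance Learning's actual procedure: correlate scores on the odd-numbered items with scores on the even-numbered items, then apply the Spearman-Brown correction to estimate full-test reliability. The item-score matrix below is made up for illustration.

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def split_half_reliability(item_scores):
    """Correlate odd-item and even-item half-test totals, then apply the
    Spearman-Brown correction to estimate full-length test reliability."""
    odd = [sum(items[0::2]) for items in item_scores]
    even = [sum(items[1::2]) for items in item_scores]
    r = pearson_r(odd, even)
    return 2 * r / (1 + r)

# Hypothetical matrix: 6 students x 6 items (1 = correct, 0 = incorrect)
scores = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1],
    [0, 0, 0, 0, 0, 0],
]
reliability = split_half_reliability(scores)
```

The correction step matters because each half contains only half the items, and shorter tests are less reliable; Spearman-Brown projects the half-test correlation up to the full test length.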

            The researcher will also use note taking to help determine students’ phonemic awareness growth. Observations of students will occur during whole group reading time and during small reading groups. The whole group reading time is when the greatest amount of phonemic awareness instruction will take place. A large portion of the researcher’s note taking during the whole group reading lesson will be anecdotal records. The researcher will be looking for student understanding and participation during phonemic awareness instruction. During small reading groups, the researcher will also take anecdotal records, again looking for student understanding and participation in phonemic awareness instruction. The researcher will use a composition-style notebook divided by tabs, each labeled with a day of the week. Each day of the week will be divided into four smaller tabs, each labeled with one student’s name. This system will allow the researcher to record anecdotal records for four students every day, allowing her to keep records for the entire class every week.
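The rotation described above (four students per weekday, covering all 20 students each week) can be sketched as a simple schedule. The student names and the fixed alphabetical assignment are placeholders; the actual study would use its enrolled ELLs and whatever rotation the researcher prefers.

```python
# Hypothetical roster of 20 Kindergarten ELLs (placeholder names)
students = [f"Student {i + 1}" for i in range(20)]
days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

# Four students per weekday, mirroring the notebook's day/student tab system
schedule = {day: students[i * 4:(i + 1) * 4] for i, day in enumerate(days)}

for day, group in schedule.items():
    print(day, "->", ", ".join(group))
```

Any partition into five groups of four works; the point is that the tab layout guarantees every student is observed once per week.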


If carried out, this research could greatly influence the teaching practices of Kindergarten English Language Learners. If the researcher’s hypothesis is correct, Kindergarten ELLs will develop strong phonemic awareness through daily phonemic awareness instruction. This will also help the Kindergartners to grow in specific areas of reading like phonics, vocabulary, and comprehension, which will in turn affect their overall literacy growth.


Appendix A: Sample Running Record

[Image: sample running record form]


Appendix B: Sample STAR Early Literacy Assessment question




Audio:  "Look at the pictures: hook, brick, road. Pick the picture whose ending sound is different from the others."



[Image: sample STAR Early Literacy phonemic awareness item]





Cunningham, J., Cunningham, P., Hoffman, J., & Yopp, H. (1998). Phonemic awareness and

            the teaching of reading. Retrieved from                   _phonemic.sflb.ashx

Pinnell, G. S., & Fountas, I. C. (2007). Fountas and Pinnell Benchmark Assessment. Retrieved from

Renaissance Learning (2009). The Foundation of the STAR Assessments. Retrieved from               



Pragmatics Literature Review

Lauri Cheng

Instructor: Michael Wei

EDCI 5544

University of Missouri Kansas City


            Language is used primarily as a communicative tool and can be used to develop social relationships between friends, coworkers, authority figures, etc. While learning a second language almost always encompasses a thorough knowledge of the form of language and its features, the pragmatic aspect of language is not as emphasized, which can result in misunderstandings and misjudgments. It is extremely important to teach students pragmatic features such as politeness, compliments, requests, apologies, and many more. Without an understanding of these social features of language, students may suffer damaged relationships because of an unintentional manner of speaking. The form and features of a language can be perfected, but without an understanding of the social and cultural norms of a language, second language learners can still suffer from miscommunication.

Cultural Implications

            Most languages include words that may have similar meanings but one may be considered formal or polite while the other is regarded as informal. The decisions speakers must make regarding which word to choose are heavily dependent on their social understanding of how they want to interact and whom they are interacting with. “Politeness and impoliteness as social practices are embedded in daily interactions, and they rely on interactants' assessments of norms of appropriateness that are historically constructed by each individual.” (Iwasaki, 2011, p. 68). This should be easily learned in a speaker’s first language (L1) because of the culturally embedded implications that are connected to learning to talk. However, when a speaker is learning a second language (L2), there are many cultural differences that can affect the decisions he or she makes when choosing which words to use for daily interactions. Many students who study abroad face an overwhelming amount of decision making in the pragmatic aspect of speech because they are flooded with daily interactions with different kinds of people.

In Iwasaki’s study of university-level Japanese L2 learners studying abroad in Japan, it is noted how different American and Japanese social norms are. Japanese culture tends to be much more oriented around formalities and politeness, especially with authority figures, as very important hierarchical relationships are established. Americans tend to associate informality with friendliness. This confusion can cause Americans studying abroad in Japan either to be overly cautious and never use informal speech or to use informal speech at inappropriate times, thinking they are being polite. “L2 users may not necessarily act "appropriately" even if they develop judgments of appropriateness as they socialize into a new language/culture; they also make choices to conform or not to conform to what they believe is appropriate.” (Iwasaki, 2011, p. 68). It is especially important for students who are studying abroad or moving to a new country permanently to explicitly learn these pragmatic features by also learning the appropriateness norms and culture specific to the context they will belong to. Most of the students in Iwasaki’s study explained that they learned pragmatic features of their L2 either through socialization or by being explicitly taught by a teacher. Socialization seemed to be the driving force behind the most effective knowledge of pragmatics. However, each student had very different experiences in socialization, and those students who were less involved in a variety of contexts had more trouble learning which features of the language to use regarding informal/formal speech. While there are certainly opportunities outside of school to practice English for students who are living abroad and studying English as their L2, there are many students who do not feel confident enough in their speaking ability to be more involved in activities and social events outside of the classroom. For this reason, teachers may need to be more proactive in explicitly teaching some of these pragmatic rules.


If a nonnative speaker is having difficulty choosing the appropriate words in an exchange with a native speaker, the native speaker will most likely be more accepting of any mistakes or at least be aware of potential miscommunication. However, relationships between native and nonnative speakers can really assist L2 speakers trying to improve their communicative competence and looking for opportunities to practice. In order to build such relationships, a basic understanding of how to regulate relationships within the specific culture or context they are in is crucial. Humor is one means of achieving this. “Whereas it is certainly possible to make friends without a keen grasp of L2 humor, its cultural specificity is an additional attraction for many learners, in that understanding humor is often thought to be key to a deeper understanding of a culture.” (Bell, 2011, p. 137). Humor can be utilized to improve relationships and assert power in a way that is more indirect and polite. “One can use humor to order steaks done medium-rare, to demand silence, or to advise a friend to avoid a particular person.” (Bell, 2011, p. 145).

This subject area has intrigued ESL teachers trying to plan lessons that are amusing and fun. However, humor is so dependent on culture and context that it may be very difficult for L2 students to understand. This is another pragmatic feature of language that could use more attention. The classroom is considered a safe place for ESL students to practice their language and experiment with new language that they may feel uncomfortable or embarrassed to use in a context outside of school. Humor is certainly one of those features that all people, but especially nonnative speakers, may feel uncomfortable practicing. Language teachers should not be aiming to turn all of their students into comedians, but there is great value in understanding and producing humor. “Rather than presenting the learners with specific formulas for appropriacy, language is taught as a set of choices, and learners are allowed to choose those that allow them to feel most at ease in the L2.” (Bell, 2011, p. 150).

Part of learning a second language rather than a foreign language is the self-discovery of a new identity within a new environment and context. ESL students living abroad may face some isolation and culture shock, which could also hinder their language ability. Teachers can only do so much in the class as far as teaching communicative skills for socializing, as it is still up to the students to utilize the knowledge they receive in the classroom and apply it to their daily interactions. “Learning a second language and culture can be more deeply engaging when the participants build their own social value within communities of practice.” (Andrew, 2009, p. 325). Andrew’s study describes international students participating in community placement programs through their university in New Zealand. The students were placed in volunteer jobs, clubs, intramural sports, etc., requiring them to assume a role within the New Zealand community. The students were then asked to reflect on their experiences through this program, and they overwhelmingly agreed on how much it helped their language abilities. This kind of encouragement from the school and ESL program could be extremely beneficial to any students studying abroad.


Writing often clearly reveals cultural differences in word use. Languages inherently contain words that reflect the culture in which they are used. For example, English has many different words that can be used to make a request feel more indirect. Compared with some other languages, polite English tends to be indirect and non-impeding. Callahan’s research article explores the differences between the speech act of requesting in Spanish and English. “With respect to this speech act, it has been shown that Spanish tends to favor more directness than does English” (Callahan, 2011, p. 171). Two emails are compared to examine the pragmatic differences in the words chosen to request a letter of recommendation. One was written in English by a native English speaker and the other in Spanish by a native Spanish speaker; both were sent to the same professor, who is bilingual in English and Spanish. The English request was much more informal and repeatedly mentioned ways the writer could make the request less burdensome for the professor. The Spanish email was more formal and much more direct, but still very polite. It seems the Spanish speaker understood politeness as a matter of formality, while the English speaker understood imposition and burdensomeness as impolite. Callahan frames these differences as positive and negative face: “In this framework, positive face refers to the desire to be liked and appreciated, while negative face refers to the desire to be unimpeded” (Callahan, 2011, p. 171). These cultural differences can be damaging if not understood properly by both sender and receiver. Receivers insensitive to these pragmatic differences may misinterpret an email or speech act as rude and impolite and attribute the perceived rudeness to a personality flaw or defect in the sender.


Native speakers may also need some education in understanding the pragmatic features of different languages. Any professional in the education field, especially ESL, needs to focus not only on the language used by nonnative speakers but also on how their own choice of language can affect students. “Linguistic politeness refers to the language individuals use to meet the face (i.e., the self-image) needs of their interlocutors” (Mackiewicz & Thompson, 2013, p. 39). Motivation is the most fragile aspect of education, and teachers may have more power over it than they think. Mackiewicz and Thompson’s article explores politeness theory with writing center tutors and how the comments and feedback these tutors provide can affect the motivation of the students seeking assistance. “Positive politeness can be particularly difficult for speakers in cross-cultural interactions to use effectively and to comprehend easily” (Mackiewicz & Thompson, 2013, p. 42). When interacting with a nonnative speaker, native speakers need to be mindful of how their choice of words could come off as confusing or even impolite, putting off the listener and diminishing their confidence and motivation in language learning.


While it is clear that explicit instruction in the classroom can help students learn pragmatic features of language, there are many ideas about what exactly teachers should do to promote this learning. Speakers need to understand the implied meaning behind a speech act, and do so in a way that becomes automatic to them. This cultural awareness is what creates communicative competence. Teachers can use narratives and introduce various ways of saying one thing while implying another. While real-world experience will always be invaluable because learners are forced to apply their knowledge in real time, classrooms can certainly create an atmosphere where that kind of practice and automaticity is encouraged. “Automaticity develops through consistent associative practices between input and learners’ response, as shown in some L2 studies” (Taguchi, 2007, p. 318).

Taguchi’s study explores what factors affect pragmatic comprehension in university-level ESL students. She examined both native and nonnative speakers and found that nonnative speakers had the most difficulty because of cultural differences, but that they improved over time with proficiency gains and more input. She also found that the speed variable tested reflected not knowledge but processing ability. These are two separate skill sets, and both should be developed to gain the most proficiency. Processing ability relates to automaticity and the “translation” involved in determining the correct implied meaning. Some more conventional pragmatic speech acts are also much easier for lower-proficiency students to comprehend. Refusals, for example, typically include some kind of explanation, which makes the implied meaning easier for students to pick up. After understanding these more conventional speech acts, students can expand to other pragmatic speech to decipher implied meanings.

Conversation analysis (CA) is another key indicator of second language acquisition (SLA). “CA for SLA purports to seek relevance of learning through actions of parties in each context of use because the learning processes are constructed in and through the talk of participants” (Lee & Hellermann, 2014, p. 766). For students to really demonstrate the learning that has taken place, they must be able to understand and participate naturally in a conversation. Too much emphasis on form can create a large vocabulary and a comprehension of grammatical rules, but if that cannot be translated into communicative competence, language acquisition has not yet taken place. “Learners often have difficulty negotiating culturally appropriate content in particular speech acts, such as requests in specific contexts, even when they are able to identify the semantic features and formula required” (Riddiford & Joe, 2010, p. 195). There are many social cues culturally embedded in conversation that may not be obvious to a nonnative speaker. During a verbal exchange, a relationship is building between the two interlocutors, and there needs to be a mutual understanding of the conversation at play. “When producing the appropriate next turn(s) at talk, the speakers display some level of knowledge regarding the import, relevance, concern, problems, or even ignorance of the prior turn” (Lee & Hellermann, 2014, p. 767). Even if a nonnative speaker is not familiar with the content being discussed, he or she can still continue the exchange by drawing on the pragmatic aspects of speech and asking questions to clarify meaning. This skill is arguably necessary at the beginning stages of language learning so that it can serve as a foundation for further learning through conversation.

Workplace Pragmatics

Most students who choose to study a second language, and especially those who study abroad, will apply this knowledge to their future careers. Those who end up working in an environment where their L2 is the common language will be more successful if they understand the appropriate use of language in a workplace setting. Some universities and programs offer intensive courses specifically to teach L2 students appropriate workplace language. In their study, Riddiford and Joe use role-playing and discourse completion tasks (DCTs) for workplace pragmatics. The DCTs presented situations in which judgments regarding power, social distance, and degree of imposition were all left to the student to decipher and then perform. “Discourse completion tasks have been criticized for eliciting only what participants think they would say in a situation (i.e., pragmatic knowledge) rather than what they would actually say in a naturally occurring situation” (Riddiford & Joe, 2010, p. 198). The role-playing activities help balance this issue because role-playing forces students to interact in scenarios closer to what they would experience in the real world. The students enrolled in this program learned valuable pragmatic features of their L2 that can help them interact with colleagues at work much more easily.

Many of the students mentioned how much more aware they were of imposition after spending time in the workplace context and cultural environment. Even very simple lessons in softening imposition helped them strengthen workplace relationships. One student mentioned how the words, “I was wondering if,” could completely change the tone and intent of her request. These adjustments in speech are very easy to make and can make a drastic difference in speech performance.

It may also be helpful to compare the use of language by native and nonnative speakers in the workplace. It is difficult to determine the best approach to a delicate situation at work, such as making a request or refusal. Role-playing scenarios and observation can both aid in understanding what kinds of cultural implications are established at work and how to react. Even some native speakers are not keenly aware of this issue and may struggle with work relationships, especially hierarchical ones. Nonnative speakers, however, are already at a disadvantage because their linguistic shortcomings can be misinterpreted. “Though miscommunication arising from difficulty with vocabulary or grammar can be clearly identified as such, pragmatic errors are less visible and more likely to be attributed to the personality of the speaker than to the speaker’s imperfect grasp of pragmatic norms” (Wigglesworth, Yates, Flowerdew, & Levis, 2007, p. 792). In Wigglesworth’s study of workplace pragmatics in Australia, native English speakers tended to speak to their superiors in a much more informal way, establishing a more equal relationship even though the relationship was technically hierarchical. Disarmers were used more frequently to prepare the conversation before a request was made. The nonnative English speakers tended to be much more overt in addressing an authority figure; they were also much more direct and did not seem as aware of the imposition implied. This example speaks directly to the culture of Australian workplaces, where there are no apparent hierarchies among employees and the speech used in this setting reflects that. Politeness is still important, however, and because decisions can be shared between interlocutors, imposition is always taken into consideration with every request.

Teaching Roles

The issue of teaching pragmatics in an ESL classroom is tricky. Pragmatic functions of language are typically thought to be learned outside the classroom, through interaction in real-world scenarios. The reasoning is that social identity and social relationships directly affect the pragmatic use of language. In a classroom, the roles of teacher and student are constant and do not give the student many opportunities to practice pragmatic speech. Activities like role-playing and textbooks with explicit pragmatic instruction can help, but they may rely on idealized scenarios that do not play out the same way in real life. Vasquez and Sharpless conducted a survey study to explore how pragmatics is taught in TESOL Master’s programs. Of the 94 institutions surveyed, an overwhelming number of responses noted how important pragmatics is becoming in the field and how much more emphasis needs to be placed on this area for aspiring teachers. The institutions that did have pragmatics courses distinguished between applied and theoretical pragmatics, with theoretical pragmatics being much less popular.

However, many institutions do not place a clear emphasis on teaching pragmatics to TESOL students, and there is extreme variability in how TESOL programs address pragmatics, if they do so at all. “We can say with considerably more certainty that less than one quarter of the programs surveyed currently offer a dedicated pragmatics course in their curriculum” (Vasquez & Sharpless, 2009, p. 23). Many responses to the survey mentioned how appreciative international TESOL Master’s students are of explicit instruction in pragmatics, as they may have some pragmatic incompetency themselves. It is a difficult subject area to teach effectively, and much more work remains to find the best method of teaching pragmatics, but it is certainly important. “Pragmatic competence does not develop alongside grammatical competence and, in fact, is believed to take longer to develop” (Vasquez & Sharpless, 2009, p. 6). Students need proficiency both in comprehension and in implied social meaning; this combination is the best way to achieve true competence in a language.


Pragmatic competence is a necessary part of language proficiency, and every native speaker embodies it to some degree. Many studies of interlanguage examine how the L1 can affect the form of the L2. The same theory could be applied to pragmatics: a speaker’s social identity in their native culture and native language will affect their L2 and, if they live abroad, potentially their new culture as well. While it may never be possible for a nonnative speaker to fully master the pragmatics of their L2 and all of the culturally embedded features that go with it, intercultural understanding can still be improved so that the speaker can succeed in an L2 environment. It is also the role of native speakers and other interlocutors to be sensitive to these pragmatic deficiencies in second language learners and not automatically interpret them as character flaws. Language teachers should be especially patient with these difficulties and continue to develop methods that help students achieve pragmatic competence. With successful pragmatic abilities, these students will be able to build deeper relationships and be prepared for more opportunities in their intercultural lives. Ultimately, this ability strengthens globalized relationships and helps diverse people understand and learn from each other.


Andrew, M. (2009). Deepened mirrors of cultural learning: Expressing identity through E-writing. CALICO Journal, 26(2), 324-336.

Bell, N. (2011). Humor scholarship and TESOL: Applying findings and establishing a research agenda. TESOL Quarterly, 45(1), 134-159.

Callahan, L. (2011). Asking for a letter of recommendation in Spanish and English: A pilot study of face strategies. Hispania, 94(1), 171-183.

Iwasaki, N. (2011). Learning L2 Japanese “politeness” and “impoliteness”: Young American men’s dilemmas during study abroad. Japanese Language and Literature, 45(1), 67-106.

Lee, Y., & Hellermann, J. (2014). Tracing developmental changes through conversation analysis: Cross-sectional and longitudinal analysis. TESOL Quarterly, 48(4), 763-788.

Mackiewicz, J., & Thompson, I. (2013). Motivational scaffolding, politeness, and writing center tutoring. The Writing Center Journal, 33(1), 38-73.

Riddiford, N., & Joe, A. (2010). Tracking the development of sociopragmatic skills. TESOL Quarterly, 44(1), 195-205.

Taguchi, N. (2007). Development of speed and accuracy in pragmatic comprehension in English as a foreign language. TESOL Quarterly, 41(2), 313-338.

Vasquez, C., & Sharpless, D. (2009). The role of pragmatics in the master’s TESOL curriculum: Findings from a nationwide survey. TESOL Quarterly, 43(1), 5-28.

Wigglesworth, G., Yates, L., Flowerdew, J., & Levis, J. (2007). Mitigating difficult requests in the workplace: What learners and teachers need to know. TESOL Quarterly, 41(4), 791-803.



Copyright © Dr. Michael Wei