The purpose of this study is to examine the relationship between the frequency of corporal punishment and students' grade level, gender, and rural versus urban schools. Literature examining these relationships exists for other parts of the world, but nothing has been reported for Taiwan. The outcomes of this analysis could inform policies and advocacy programs aimed at decreasing teachers' corporal punishment of students in Taiwan.
Data required for this study have been gathered by different organizations in Taiwan over the years, but very little has been done to analyze those data further toward a more meaningful understanding of corporal punishment of students. This study is based on secondary analysis of available data from the Humanistic Education Foundation in Taipei, Taiwan. Like other data collection methods, secondary data analysis has certain inherent limitations. Perhaps the most serious is that the data only approximate the kinds of data the investigator would like to employ for testing the hypotheses. There is often an inevitable gap between the primary data an investigator would collect with specific research purposes in mind and the data already collected by others. Differences are likely to appear in sample size and design, question wording and sequence, the interview schedule and method, and the overall structure of the study. Secondary analysis may cause further problems if the researcher has insufficient information about how the data were collected; this information is important for determining potential sources of bias, error, or other threats to internal or external validity. For all these reasons, using secondary data means analyzing and scrutinizing the available data in ways that often go above and beyond what directly collected data require.
The data used in this study were gathered by the Humanistic Education Foundation in Taipei, Taiwan in 2004. The Humanistic Education Foundation is one of the grassroots advocacy groups in Taiwan. It devotes itself to many educational issues, especially school violence and teachers' corporal punishment of students, and has conducted a series of surveys titled "Current Teachers' Corporal Punishment in Elementary and Junior High School in Taiwan" since 1999. The data for this study come from the survey conducted in 2004. The reasons for not choosing data from 1999 to 2003 are as follows. The questionnaire used in 1999 and 2000 was a pilot instrument and was never validated. The questionnaire used in 2001 and 2002 was improved, but it was based on a qualitative research design, and as a result there was very little quantitative information in the results.
For 2003, the Foundation revised several questions with the intent of creating a more reliable and valid questionnaire. However, the 2003 data were collected only in Taipei and contained many missing values. In fact, the same issues also existed in the data collected from 1999 to 2003.
In an effort to ensure better data quality and broader representation, the Foundation made a significant investment in the 2004 survey. It improved the survey and overall approach to address many of the issues from the previous versions. The 2004 data covered a much broader geography, and specific effort was made to minimize missing values and increase the response rate using a validated and more reliable questionnaire. For all these reasons, the data from 2004 were chosen for this research study.
The study employs a national sample of 1311 elementary and junior high school students in different areas in Taiwan. The sampling for the study is as follows:
(1) students from elementary school
(2) students from junior high school
(3) students from Northern, Central, and Southern regions in Taiwan
(4) schools from both urban and rural areas
(5) grade level from 1 to 9
A stratified random sampling method was used to draw a representative sample of students. First, investigators followed the Taiwan National Development Plan from the Taiwan central government and divided the country into three regions: North, Central, and South. The regions differ slightly in culture, political and economic status, and natural environment. Second, in each region one rural and one urban area were selected at random. This produced a three-by-two stratification: urban areas in the North, Central, and South, and rural areas in the North, Central, and South. Third, a proportionate stratified random sample was drawn to represent elementary and junior high schools; 62 junior high schools and 162 elementary schools were selected, reflecting the distribution across strata. Within every school, disproportionate stratified sampling was employed, with 6 students selected per school in the sampling frame. Because 3 elementary schools dropped out of the project, the final number of elementary schools was 159. In total, 1344 students were selected, with a response rate of 97.5%.
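The multistage draw described above can be sketched in a few lines of Python. All counts and names here are invented for illustration (the Foundation's actual sampling frame is not public); only the 3 x 2 stratification and the six-students-per-school rule come from the text.

```python
import random

random.seed(7)  # reproducibility for the illustration

regions = ["North", "Central", "South"]
areas = ["urban", "rural"]

# Stages 1-2: one urban and one rural area per region -> 3 x 2 strata.
strata = [(r, a) for r in regions for a in areas]

# Stage 3: draw schools within each stratum. The frame and the per-stratum
# allocation below are hypothetical, not the study's actual figures.
frame = {s: [f"school_{s[0]}_{s[1]}_{i}" for i in range(40)] for s in strata}
schools_per_stratum = 6  # placeholder allocation

sample = []
for s in strata:
    for school in random.sample(frame[s], schools_per_stratum):
        # Stage 4: six students selected in each sampled school.
        sample.extend((school, f"student_{j}") for j in range(6))

print(len(sample))  # 3 regions x 2 areas x 6 schools x 6 students = 216
```

The real design departs from this sketch in one respect the paragraph notes: the school draw was proportionate to stratum size rather than a fixed count per stratum.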
Strengths and Limitations of the Sampling Method
Researchers use stratified sampling primarily to ensure that different groups of a population are represented adequately in the sample, so as to increase the level of accuracy. Furthermore, all other factors being equal, stratified sampling considerably reduces the cost of performing the research. The underlying idea is to use available information on the population to divide it into groups such that the elements within each group are more alike one another than are the elements of the population as a whole. This results in a set of homogeneous strata based on the variables of interest. If a series of homogeneous groups can be sampled in such a way that, when the samples are combined, they constitute a sample of a larger and more heterogeneous population, the accuracy of population estimates will be increased. The stratification procedure does not violate the principle of random selection, because a probability sample is subsequently drawn within each stratum.
The fundamental principle in dividing a sample into homogeneous strata is that the criteria on which the division is based be related to the variable the researcher is studying. Another important consideration is that the resulting subsamples, taken together, should not increase the total sample size beyond what a simple random sample would require. If too many criteria were used, the value of the stratified sample would diminish, because the number of subsamples required would become enormous.
Sampling from the different strata can be either proportionate or disproportionate. If the number of sampling units taken from each stratum is of the same proportion within the total sample as the proportion of the stratum within the total population (a uniform sampling fraction, n/N), we obtain a proportionate stratified sample. If the proportion of sampling units from each stratum in the total sample is above or below the proportion of the total number (N) in each stratum within the population (that is, if the sampling fractions vary), the sample is a disproportionate stratified sample. In other words, when the number of people characterized by each variable (or stratum) fluctuates within the population, we need to choose the sample size for each stratum according to our research requirements. This choice is influenced by the likelihood of obtaining a sufficient number of sampling units from each stratum in the final sample. As a rule, disproportionate stratified samples are used either to compare two or more particular strata or to analyze one stratum intensively (Creswell, 1994). When researchers use a disproportionate stratified sample, the estimates of the population's parameters must be weighted by the number of units belonging to each stratum. In this sample, weighting does not appear to have been applied in the original data.
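The weighting described here, which the original data apparently lack, amounts to scaling each stratum's estimate by its population share (design weight w_h = N_h / n_h). A minimal sketch, with entirely hypothetical stratum counts and proportions, since the Foundation's stratum totals are not reported:

```python
# Design weight for stratum h: w_h = N_h / n_h (population over sample count).
# All numbers below are hypothetical, for illustration only.
population = {"urban": 120_000, "rural": 40_000}   # N_h
sampled    = {"urban": 654,     "rural": 657}      # n_h

weights = {h: population[h] / sampled[h] for h in population}

# Weighted estimate of a proportion from disproportionate stratum estimates p_h:
p = {"urban": 0.30, "rural": 0.45}                 # hypothetical stratum estimates
p_weighted = sum(population[h] * p[h] for h in population) / sum(population.values())
print(round(p_weighted, 4))  # 0.3375, versus the naive unweighted mean of 0.375
```

The gap between the weighted and unweighted figures illustrates why unweighted national estimates from a disproportionate design can be biased.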
Once researchers have defined the population of interest, they draw a sample that represents that population adequately. The actual procedure involves selecting a sample from a sampling frame, a complete listing of sampling units. Ideally, the sampling frame should include all the sampling units in the given population. In practice, such information is rarely available, so researchers usually have to use substitute lists that contain the same information but may not be comprehensive. There should be a high degree of correspondence between the sampling frame and the sampling population: the accuracy of the sample depends first and foremost on the sampling frame, because every aspect of the sample design (the population covered, the stages of sampling, and the actual selection process) is influenced by it. Before selecting a sample, the researcher has to evaluate the sampling frame for potential problems. According to Kish (1965), the typical problems found in sampling frames are incomplete frames, clusters of elements, and blank foreign elements.
The problem of an incomplete sampling frame arises when sampling units found in the population are missing from the list. When the sampling frame is incomplete, one option is the use of a supplemental list. In these data, schools were sampled from complete sampling frames, so this problem does not exist in this sample. The second potential problem, clusters of elements, occurs when the sampling units are listed in groups rather than individually; this problem also does not exist in this sample. The third potential problem, blank foreign elements, is quite common in studies. It occurs when some of the sampling units in the sampling frame are not part of the research population, as when the research population is defined as eligible voters but the sampling frame includes individuals who are too young to vote. This problem often arises when outdated lists are used as the sampling frame. It does not apply to the current survey.
Compared to a clinical, convenience, or purposive sample, the sample in this study is more generalizable, capturing a wide spectrum of data on teachers' corporal punishment of students in schools and on students with a range of victimization experiences. It also reaches students who have not sought outside assistance for various reasons. Most importantly, it reaches many schools in rural areas, which have fewer educational resources than urban areas. Northern Taiwan is more urbanized and rich in resources; so far, many studies have been conducted in the North but far fewer in the Central or Southern regions. These data can help us understand current teachers' corporal punishment in the areas with more vulnerable educational systems.
Overall, these data are more nationally representative because of the stratified sampling method. The sampling strategy also reaches students in rural and other areas of Taiwan that are often ignored by researchers. Most importantly, with this sampling method we can more accurately estimate the prevalence of teachers employing corporal punishment on students. The victimization rate is more representative, and there are no issues with the sampling frames, which strengthens confidence in the data. Given the high response rate (97.5%), the data can be regarded as highly representative of the overall population. However, since only elementary and junior high schools in Taiwan were sampled, the results of this study can be generalized only to elementary and junior high schools in Taiwan; they do not extend to senior high schools, and they may not generalize to other countries.
A self-created instrument, Teachers' Aggressive Punishment toward Students (Humanistic Education Foundation, 2004), was administered to measure the variables of interest in this study.
This instrument is composed of 42 items, as follows:
Basic Demographic Information. Four items provided demographic information: students' location (North, Central, or South; urban or rural), gender, school, and grade level.
Punishment Types in Schools. This variable was based on situations or events that students witnessed at their schools. In the first part of the survey, participants were asked, "What types or ways of teachers' punishment have you seen since last semester?" Students answered 11 questions covering different types of punishment, including, for example, "I saw teachers asking students to strike other students as punishment," "I saw teachers directly hitting students," and "I saw teachers depriving students of basic needs, such as eating, drinking, and resting." Responses were coded 1 = no and 2 = yes. The scale had an alpha reliability of .78.
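The alpha reliabilities reported for these subscales are Cronbach's alpha, which can be reproduced directly from raw item scores. A sketch in plain Python, using invented 1 = no / 2 = yes responses (the study's actual response data are not available here):

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k, n = len(items), len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Three items scored 1 = no, 2 = yes by five students (toy data).
items = [
    [1, 2, 2, 1, 2],
    [1, 2, 2, 1, 1],
    [1, 2, 1, 1, 2],
]
print(round(cronbach_alpha(items), 2))  # 0.75
```

In practice this is the same statistic SPSS reports under Analyze > Scale > Reliability Analysis.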
The second part examines the implements that students had seen teachers use when punishing students. The question was, "What tools have you seen teachers use since last semester when they punish students?" There are 7 items, including, for example, "Did you see teachers use their hands to aggressively punish students?", "Did you see teachers using a ruler to punish students?", and "Did you see teachers using a rod to punish students?" Responses were coded 1 = no and 2 = yes. The scale had an alpha reliability of .77.
The third part of the scale is an open-ended question intended to uncover any out-of-the-ordinary punishment tools or methods employed by teachers in schools. Students were asked, "What are the strangest implements and ways that teachers employ when they punish students?" and were asked to briefly describe what they saw.
Prevalence of Punishment. This subscale consists of three questions referring to students' personal experience of teachers' administration of corporal punishment. The first item is, "What proportion of your teachers do you think have physically punished students since last semester?" Responses were given on a 4-point Likert-type scale: 1 = all of them, 2 = over a half, 3 = below a half, and 4 = none. The second question is, "How many times have you been physically punished by teachers since last semester?" Responses were given on a 4-point Likert-type scale: 1 = never, 2 = 1 to 5 times, 3 = 6 to 10 times, and 4 = over 10 times. The third question is, "Have your teachers asked your parents to sign a contract permitting physical punishment?" Responses were coded 1 = yes, 2 = no, and 3 = don't know.
Reasons for Punishment. The fourth subscale consists of 6 items referring to the reasons why teachers punished students. The first five items are "I was punished by teachers due to poor academic performance," "I was punished by teachers due to behavior problems," "I was punished by teachers due to bad attitude," "I was punished by teachers due to fighting with other students," and "I was punished by teachers for being associated with other students who had committed an offense." The sixth question allows an open answer so students could describe other reasons why they were punished. Responses to the first five items were coded 1 = no and 2 = yes. These five items had an alpha reliability coefficient of .63.
Impact of Punishment. This subscale examines the impact of teachers' punishment on students along two dimensions. The first dimension concerns the impact of direct punishment. Students were asked about their thoughts when punished by teachers; the items included "I feel scared, ashamed, and guilty," "I feel angry and want to retaliate," "I feel I deserved it," "I think it is cruel and unreasonable treatment," "I feel I am the target of the teachers venting their frustration," and "I don't know." A final open question allows students to describe their feelings or thoughts. Responses to the six items were coded 1 = no and 2 = yes. The first six items had an alpha reliability coefficient of .82.
The second dimension examines the impact of witnessing teachers' punishment of other students. Students were asked, "When you see other students punished by teachers, what do you think?" Answers included "I feel sympathy toward those students," "I feel lucky that I am not that person," "I think the teachers are right to punish these students," "I feel injustice and anger at teachers' punishment of those students," and "I don't know my feelings." A final open question allows students to freely describe their feelings and thoughts. Responses to the first 5 items were coded 1 = no and 2 = yes. The first 5 items had an alpha reliability coefficient of .79.
Potential Transmission of Teachers' Punishment. This variable examines potential transmission of teachers' punishment from the students' point of view. Students were asked, "Would you employ corporal punishment toward students if you became a teacher in the future?" Responses were coded 1 = yes, 2 = no, 3 = it depends, and 4 = unsure.
Students' Self-Reported Academic Performance. This variable measures students' academic performance by self-report. Students were asked, "What is your academic performance?" Responses were coded 1 = good, 2 = fair, and 3 = bad.
Perception of Teachers' Punishment as Illegal. This variable investigates whether students knew that teachers' corporal punishment is illegal. Students were asked, "Do you know that it is illegal for teachers to physically punish students?" Responses were coded 1 = yes, I know, and 2 = no, I do not know.
All data cleansing and analysis were conducted using SPSS for Windows (Version 13.0). The initial phase of the analysis involved calculating the completion (response) rate and examining characteristics of the respondents, such as basic socio-demographic characteristics and background. The completion rate was calculated by dividing the number of respondents who participated in the study by the number of prospective respondents who had been contacted and met the sampling eligibility criteria. Respondents' characteristics were examined using descriptive statistics such as frequencies, means, standard deviations, modes, and ranges.
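The completion-rate arithmetic can be verified directly from the figures in the sampling section (1344 students drawn, 1311 responding), and the descriptive statistics named above are standard-library one-liners. The grade list below is invented purely for illustration.

```python
import statistics

selected, responded = 1344, 1311
completion_rate = responded / selected
print(f"{completion_rate:.1%}")  # 97.5%, matching the reported response rate

# Descriptive statistics of the kind reported for respondent background
# (the grade values here are invented).
grades = [3, 5, 7, 5, 9, 1, 5, 7]
print(statistics.mean(grades), statistics.stdev(grades), statistics.mode(grades))
```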
The analytical methods employed in testing the proposed hypotheses consisted primarily of the following procedures: (1) a descriptive analysis of respondents' experiences with teachers' punishment; (2) estimation of the prevalence of teachers' punishment, with a bivariate analysis of prevalence by demographics and students' background; and (3) bivariate analysis of experiences of teachers' aggressive punishment.
Descriptive analysis of experiences of teachers' punishment:
Characteristics of the violence experienced by respondents were assessed using descriptive statistics. These analyses covered the specific types of punishment employed by teachers, witnessing of punishments, students' academic performance, and students' behavioral and emotional responses. Content analysis was conducted on responses to the open-ended questions, such as respondents' emotional and behavioral reactions to teachers' punishment and the perceived influence of cultural values on responses to teachers' aggressive punishment.
Estimating the prevalence of teachers' aggressive punishment:
Basic statistical tools were used to estimate the prevalence of teachers' punishment, and measures of association were used to examine the relationship between rates of teachers' punishment and students' basic demographics.
To test whether students' experiences of teachers' punishment vary by gender, school location, and grade level, chi-square tests, the Kolmogorov-Smirnov test, and measures of association were employed (Creswell, 1994).
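A chi-square test of independence of the kind described can be worked by hand rather than in SPSS. The 2 x 2 table below (punished versus not, by gender) uses invented counts, not the study's results:

```python
# Hypothetical counts: rows = gender, columns = punished yes / no.
observed = [[310, 340],   # boys
            [250, 411]]   # girls

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Chi-square statistic: sum over cells of (O - E)^2 / E,
# where E = row total * column total / grand total.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i in range(2) for j in range(2)
)
df = (len(observed) - 1) * (len(observed[0]) - 1)
print(round(chi2, 2), "df =", df)  # compare against the .05 critical value of 3.84 for df = 1
```

A statistic above the critical value would indicate that punishment experience and gender are not independent in the hypothetical table.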
Creswell, J. W. (1994). Research design: Qualitative and quantitative approaches. Thousand Oaks, CA: Sage Publications.
Kish, L. (1965). Survey sampling. New York: John Wiley and Sons.
The Berliner reading concerns educational research: at the same time that educational researchers themselves are expanding their repertoire, the federal government is narrowing its focus to "scientific research."
1. What do you think are some likely outcomes of this conflict?
Educational Research: The Hardest Science of All, by David C. Berliner
Under the stewardship of the Department of Education, recent acts of Congress confuse the methods of science with the process of science, possibly doing great harm to scholarship in education. An otherwise exemplary National Research Council report to help clarify the nature of educational science fails to emphasize the complexity of scientific work in education due to the power of contexts, the ubiquity of interactions, and the problem of “decade by findings” interactions. Discussion of these issues leads to the conclusion that educational science is unusually hard to do and that the government may not be serious about wanting evidence-based practices in education.
“Scientific Culture and Educational Research” (this issue), as well as the National Research Council (NRC) report from which it draws, are important documents in the history of educational research. I commend the authors and panelists who shaped these reports, and I support their recommendations. But it is not clear to me that science means the same thing to all of us who pay it homage, nor do I think that the distinctions between educational science and other sciences have been well made in either report. There are implications associated with both these issues.
Definitions of Science
I admire Richard Feynman’s (1999) definition of science as “the belief in the ignorance of authority” (p. 187). Unrestricted questioning is what gives science its energy and vibrancy. Values, religion, politics, vested material interests, and the like can distort our scientific work only to the extent that they stifle challenges to authority, curtailing the questioning of whatever orthodoxy exists. Unfettered, science will free itself from false beliefs or, at the least, will moderate the climate in which those beliefs exist. As politicians recognize that “facts are negotiable, perceptions are
rock solid,” so there is no guarantee that science will reduce ignorance. But as long as argument is tolerated and unfettered, that possibility exists. Another admirable definition of science was provided by Percy Bridgman (1947), who said there really is no scientific method, merely individuals “doing their damndest with their minds, no holds barred” (pp. 144–145). I admire Feynman’s and Bridgman’s definitions of science because neither confuses science with method or technique, as I believe happens in recent government proclamations about the nature of appropriate, and therefore fundable, educational research. World-renowned scientists do not confuse science with method. As Peter Medawar said, “what passes for scientific methodology is a misrepresentation of what scientists do or ought to do.” The “evidence-based practices” and “scientific research” mentioned over 100 times in the No Child Left Behind Act of 2001 are code words for randomized experiments, a method of research with which I too am much enamored. But to think that this form of research is the only “scientific” approach to gaining knowledge—the only one that yields trustworthy evidence— reveals a myopic view of science in general and a misunderstanding of educational research in particular. Although strongly supported in Congress, this bill confuses the methods of science with the goals of science. The government seems to be inappropriately diverging from the two definitions of science provided above by confusing a particular method of science with science itself. This is a form of superstitious thinking that is the antithesis of science. Feuer, Towne, and Shavelson, representing the entire NRC committee, clearly recognize this mistake, and we should all hope that they are persuasive. 
To me, the language in the new bill resembles what one would expect were the government writing standards for bridge building and prescription drugs, where the nature of the underlying science is straightforward and time honored. The bill fails to recognize the unique nature of educational science.
Hard and Soft Science: A Flawed Dichotomy
The distinctions between hard and soft sciences are part of our culture. Physics, chemistry, geology, and so on are often contrasted with the social sciences in general and education in particular. Educational research is considered too soft, squishy, unreliable, and imprecise to rely on as a basis for practice in the same way that other sciences are involved in the design of bridges and electronic circuits, sending rockets to the moon, or developing new drugs.
But the important distinction is really not between the hard and the soft sciences. Rather, it is between the hard and the easy sciences. Easy-to-do science is what those in physics, chemistry, geology, and some other fields do. Hard-to-do science is what the social scientists do and, in particular, it is what we educational researchers do. In my estimation, we have the hardest-to-do science of them all! We do our science under conditions that physical
scientists find intolerable. We face particular problems and must deal with local conditions that limit generalizations and theory building—problems that are different from those faced by the easier-to-do sciences. Let me explain this by using a set of related examples: The power of context, the ubiquity of interactions, and the problem of “decade by findings” interactions. Although these issues are implicit in the Feuer, Towne, and Shavelson article, the authors do not, in my opinion, place proper emphasis on them.
The Power of Contexts
In education, broad theories and ecological generalizations often fail because they cannot incorporate the enormous number or determine the power of the contexts within which human beings find themselves. That is why the Edison Schools, Success for All, Accelerated Schools, the Coalition of Essential Schools, and other school reform movements have trouble replicating effects from site to site. The decades old Follow-Through study should
have taught us about the problems of replication in education (House, Glass, McLean, & Walker, 1978). In that study, over a dozen philosophically different instructional models of early childhood education were implemented in multiple sites over a considerable period of time. Those models were then evaluated for their effects on student achievement. It was found that the variance in student achievement was larger within programs than it was between programs. No program could produce consistency of effects across sites. Each local context was different, requiring differences in programs, personnel, teaching methods, budgets, leadership, and kinds of community support. These huge context effects cause scientists great trouble in trying to understand school life. It is the reason that qualitative inquiry
has become so important in educational research. In this hardest-to-do science, educators often need knowledge of the particular—the local—while in the easier-to-do sciences the aim is for more general knowledge. A science that must always be sure the myriad particulars are well understood is harder to build than a science that can focus on the regularities of nature across contexts. The latter kinds of science will always have a better chance to understand,
predict, and control the phenomena they study. Doing science and implementing scientific findings are so difficult
in education because humans in schools are embedded in complex and changing networks of social interaction. The participants in those networks have variable power to affect each other from day to day, and the ordinary events of life (a sick child, a messy divorce, a passionate love affair, migraine headaches, hot flashes, a birthday party, alcohol abuse, a new principal, a new child in the classroom, rain that keeps the children from a recess outside the school building) all affect doing science in school settings by limiting the generalizability of educational research findings. Compared to designing bridges and circuits or splitting either atoms or genes, the science to help change schools and classrooms is harder to do because context cannot be controlled.
The Ubiquity of Interactions
Context is of such importance in educational research because of the interactions that abound. The study of classroom teaching, for example, is always about understanding the 10th or 15th order interactions that occur in classrooms. Any teaching behavior interacts with a number of student characteristics, including IQ, socioeconomic status, motivation to learn, and a host of other factors. Simultaneously, student behavior is interacting with
teacher characteristics, such as the teacher’s training in the subject taught, conceptions of learning, beliefs about assessment, and even the teacher’s personal happiness with life. But it doesn’t end there because other variables interact with those just mentioned— the curriculum materials, the socioeconomic status of the community,
peer effects in the school, youth employment in the area, and so forth. Moreover, we are not even sure in which directions the influences work, and many surely are reciprocal. Because of the myriad interactions, doing educational science seems very difficult, while science in other fields seems easier. I am sure were I a physicist or a geologist I would protest arguments from outsiders about how easy their sciences are compared to mine. I know how “messy” their fields appear to insiders, and that arguments about the status of findings and theories within their disciplines can be fierce. But they have more often found regularities in nature across physical contexts while we struggle to find regularities across social contexts. We can make this issue about the complexity we face more concrete by using
the research of Helmke (cited in Snow, Corno & Jackson, 1995). Helmke studied students’ evaluation anxiety in elementary and middle school classrooms. In 54 elementary and 39 middle school classrooms, students’ scores on questionnaires about evaluation anxiety were correlated with a measure of student achievement. Was there some
regularity, some reportable scientific finding? Absolutely. On average, a negative correlation of modest size was found in both elementary and middle school grades. The generalizable finding was that the higher the scores on the evaluation anxiety questionnaire, the lower the score on the achievement test. But this simple scientific finding totally misses all of the complexity in the classrooms studied. For example, the negative correlations ran from about −.80 to zero, but a few were even positive, as high as +.45. So in some classes students’ evaluation anxiety was so debilitating that their achievement was drastically lowered, while in other classes the effects were nonexistent. And
in a few classes the evaluation anxiety apparently was turned into some productive motivational force and resulted in improved student achievement. There were 93 classroom contexts, 93 different patterns of the relationship between evaluation anxiety and student achievement, and a general scientific conclusion that completely missed the particularities of each classroom situation. Moreover, the mechanisms through which evaluation anxiety resulted in reduced student achievement appeared to be quite different in the elementary classrooms as compared to the middle
school classrooms. It may be stretching a little, but imagine that Newton’s third law worked well in both the northern and southern hemispheres—except of course in Italy or New Zealand—and that the explanatory basis for that law was different in the two hemispheres. Such complexity would drive a physicist crazy, but it is a part of the day-to-day world of the educational researcher. Educational researchers have to accept the embeddedness of educational phenomena in social life, which results in the myriad interactions that complicate our science. As Cronbach
once noted, if you acknowledge these kinds of interactions, you have entered into a hall of mirrors, making social science in general, and education in particular, more difficult than some other sciences.

Decade by Findings Interactions
There is still another point about the uniqueness of educational science, the short half-life of our findings. For example, in the 1960s good social science research was done on the origins of achievement motivation among men and women. By the 1970s, as the feminist revolution worked its way through society, all data that described women were completely useless. Social and educational research, as good as it may be at the time it is done, sometimes shows these “decade by findings” interactions. Solid scientific findings in one decade end up of little use in another
decade because of changes in the social environment that invalidate the research or render it irrelevant. Other examples come to mind. Changes in conceptions of the competency of young children and the nature of their minds resulted in a constructivist paradigm of learning replacing a behavioral one, making irrelevant entire journals of scientific behavioral findings about educational phenomena. Genetic findings have shifted social views about race, a concept now seen as worthless in both biology and anthropology. So previously accepted social science studies about differences between the races are irrelevant because race, as a basis for classifying people in a research study, is now understood to be socially, not genetically, constructed. In all three cases, it was not bad science that caused findings to become irrelevant. Changes in the social, cultural, and intellectual environments negated the scientific work in these areas. Decade by findings interactions seem more common in the social sciences and education than they do in other scientific fields of inquiry, making educational science very hard to do.
The remarkable findings, concepts, principles, technology, and theories we have come up with in educational research are a triumph of doing our damnedest with our minds. We have conquered enormous complexity. But if we accept that we have unique complexities to deal with, then the orthodox view of science now being put forward by the government is a limited and faulty one. Our science forces us to deal with particular problems, where local knowledge is needed. Therefore, ethnographic research is crucial, as are case studies, survey research, time series, design experiments, action research, and other means to collect reliable evidence for engaging in unfettered argument about education issues. A single method is not what the government should be promoting for educational researchers. It would do better by promoting argument, discourse, and discussion. It is no coincidence that early versions of both democracy and science were invented simultaneously in ancient Greece. Both require the same freedom to argue and question authority, particularly the government. It is also hard to take seriously the government’s avowed desire
for solid scientific evidence when it ignores the solid scientific evidence about the long-term positive effects on student learning of high-quality early childhood education, small class size, and teacher in-service education. Or when it ignores findings about the poor performance of students when they are retained in grade, assigned uncertified teachers or teachers who have out-of-field teaching assignments, or suffer a narrowed curriculum
because of high-stakes testing. Instead of putting its imprimatur on the one method of scientific inquiry to improve education, the government would do far better to build our community of scholars, as recommended in the NRC report. It could do that by sponsoring panels to debate the evidence we have collected from serious scholars using
diverse methods. Helping us to do our damnedest with our minds by promoting rational debate is likely to improve education more than funding randomized studies with their necessary tradeoff of clarity of findings for completeness of understanding. We should never lose sight of the fact that children and teachers in classrooms are conscious, sentient, and purposive human beings, so no scientific explanation of human behavior could ever be complete.
In fact, no un-poetic description of the human condition can ever be complete. When stated this way, we have an argument for heterogeneity in educational scholarship and for convening panels of diverse scholars to help decide what findings are and are not worthy of promoting in our schools. The present caretakers of our government would be wise to remember Justice Jackson’s 1950 admonition: “It is not the function of our government to keep the citizens from falling into error; it is the function of the citizen to keep the government from falling into error.” Promoting debate on a variety of educational issues among researchers and practitioners with different methodological perspectives would help both our scholars and our government to make fewer errors. Limiting who is funded and who will be invited to those debates is more likely to increase our errors.
Excerpt From Essay:
Essay Instructions: 1) Requesting the same gentleman who wrote the previous Research Paper for me as I need for him to "fine tune" his previous paper.
2) Review must address the following:
a) address the educational issue/problem of
Students with Visual Impairments: Inclusion or Schools
for the Blind
b) address topics & subtopics NOT simply to summarize what
others have already found
c) 21 research articles from 1995 to present
d) include: introduction, body to include subtopics & what
research says about them, a conclusion that summarizes
the findings & addresses limitations and suggestions
for future research
e) APA style
f) incorporate quotes, citations...not overly done,
Can you please attach a printout of the sources utilized so that I can go in and further edit, if needs be? This would be helpful vs. my searching under the title of the Journal and having to subscribe to open anything up!
I shall e-mail further sources as some are from before and some are newer.
PLEASE ENSURE THAT THE PAPER ATTACHES PROPERLY THE FIRST TIME...THE LAST TIME IT WAS NOT AND IT CAUSED A FURTHER DELAY! THANK YOU.
Excerpt From Essay:
Essay Instructions: Conduct a literature search and select one article, published in the last three years, addressing one of the following educational issues: finance, governance, curriculum, changing demographics, instructional practices, or issues and challenges present in urban versus rural school districts. In a 1050-word paper, summarize the selected article and analyze how this issue might affect elementary public schools in San Francisco, CA.