As researchers and as a department actively involved in ensuring that privacy laws (in particular, the Family Educational Rights and Privacy Act and the Protection of Pupil Rights Amendment) are enforced in our district, we certainly appreciate concerns about the confidentiality of survey responses. Some of our surveys (e.g., the annual parent survey) are truly anonymous. For these, we create an online survey, generate a link, and distribute that link to the intended audience (e.g., teachers, parents, and the general public). The only information known about respondents is their responses to the survey items, along with their computer’s IP address (i.e., the number assigned by a service provider, such as RoadRunner, SBC Global, or AISD).
Much of the work we do with employees is targeted at particular groups (e.g., bilingual teachers, high school principals, central office employees). To minimize the number of surveys sent to district employees, we use district databases to pull email addresses by work location or job role, upload a list of potential respondents to our online survey provider, and then distribute surveys to the appropriate groups. The online systems we use for this type of survey can track respondents, giving us current information about response rates, and can send follow-up reminders to those who have not yet taken a survey. Targeting only non-responders with reminders avoids adding to email overload and spares those who have already responded from repeat messages.
Please note that in this type of survey, as well as in surveys in which respondents are required to identify themselves, responses are never linked with the survey taker’s name or email address for reporting purposes. Although it is theoretically possible to view an individual person’s responses, the only people who will ever have access to the complete data file that results from a survey are DRE staff, who are ethically and legally bound to maintain confidentiality. To further ensure respondents’ confidentiality, identifying information is removed after a survey has been closed and responses have been downloaded.
In cases where our work requires us to track people over time (e.g., research on teacher attrition or student mobility), confidentiality of individuals is maintained at all times, and only aggregate data are reported. Additionally, to further protect the identity of individual respondents, we carefully analyze the cell sizes (i.e., number of people) of any subgroups before subdividing results in our reports. Results for cell sizes that are deemed too small are not reported.
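A minimum-cell-size rule like the one described above can be sketched as follows. The threshold and subgroup figures here are invented for illustration; the actual cutoff DRE uses is not stated in this document.

```python
# Hypothetical minimum cell size; the actual cutoff used by DRE
# is not specified in this document.
MIN_CELL_SIZE = 10

def report_value(count, value):
    """Report a subgroup's result only when the cell (number of
    people) is large enough; otherwise mask it."""
    return value if count >= MIN_CELL_SIZE else "*"

# Illustrative subgroup results: (number of respondents, result)
results = {"Campus A": (42, "78%"), "Campus B": (6, "50%")}
for group, (n, pct) in results.items():
    print(group, report_value(n, pct))  # Campus B's result is masked
```

Applying the check before any subdividing of results guarantees that a reader cannot infer an individual's response from a very small subgroup.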
We know that many parents, students, and AISD employees look forward to the opportunity to express their views and provide feedback about important issues that affect their lives. The data obtained from our surveys are used to inform district, state, and even federal practices and policies. Participation in DRE surveys is always voluntary, and respondents always have the option to skip any items they do not wish to answer or to opt out of any survey. However, feedback is very valuable and we hope people will take the time to participate.
If you have any questions, please contact the DRE staff at (512) 414-1724.
The Public Education Information Management System (PEIMS) encompasses all data requested and received by the Texas Education Agency (TEA) about public education, including students’ demographics and academic performance, and personnel, financial, and organizational information. For the PEIMS electronic collection, school districts submit their data via standardized computer files, as defined by the PEIMS Data Standards.
Data from the PEIMS are used to evaluate Texas school districts and campuses through the Texas Academic Performance Reports (TAPR). These annual reports pull together a wide range of information about the performance of students in each school and district in Texas.
In compliance with the Texas Education Code, the PEIMS contains only the data necessary for the legislature and TEA to perform their legally authorized functions in overseeing public education.
The data in various reports can reflect different points in time, different student or staff groups, different decision rules for data inclusion, and different sources of similar information. Here are some examples of why seemingly similar data from two reports may look different.
Some reports include results for students who were enrolled on the last Friday in October, the official state snapshot date for the fall PEIMS submission (see What is PEIMS?), while other reports include data for every student who was enrolled at any time during the school year.
Some reports indicate the percentage of students who passed a test (e.g., STAAR) after all possible attempts, while other reports indicate the percentage of students who passed the test on the first try. Analysis decisions depend on the purpose of the report.
Some analyses use self-reported data (e.g., survey reports that disaggregate results based on self-reported gender, ethnicity, or grade level), while other reports use information contained in district databases.
Staff in DRE are careful to document the source of all data and the decision rules that are used in each report.
The main criterion for whether or not a program is evaluated is the existence of funding for evaluation. Many state, federal, and foundation grants that financially support district programs require that these programs be evaluated, and designate funds for that purpose.
The Texas Education Agency (TEA) collects and reports a wide variety of information about student outcomes each year at the campus, district, and state levels through its Public Education Information Management System (PEIMS). Performance on statewide assessments (i.e., STAAR) is disaggregated by subgroups, including ethnicity, gender, special education, low-income status, English language learner [ELL] status, and other groups. View standard PEIMS reports.
Texas Academic Performance Reports (TAPR) use PEIMS data to summarize information about students’ enrollment and demographics, completion status, attendance rates, advanced coursework, and testing participation. The reports, available annually, also include information about school staffing, finance and budgets, and special programs. View more information about TAPR.
TEA's Accountability Rating System for Texas public schools and school districts uses a subset of the performance measures computed for TAPR to assign a rating to each public school and district. View information about current and past school ratings.
Program evaluation is a series of activities that can be helpful in program or project management. These activities can include conducting a needs assessment, monitoring program activities, measuring program outcomes, informing stakeholders’ decisions about program improvement or continuation, and planning future projects.
At the request of AISD Special Education Department staff, a report was written by DRE staff about using program evaluation for program sustainability. View the full report.
When it is not feasible to survey an entire population, samples are used instead. However, when a sample is used to make inferences about a population, the results must be interpreted with caution. For example, although 87% of a sample may select a particular survey response, this does not necessarily mean 87% of the entire population feels the same way. To interpret sample data cautiously, the following information is used to construct an interval that describes the range within which the true population result is likely to fall:
- Population size (e.g., the number of students on a campus)
- Sample size (e.g., the number of survey respondents at a campus)
- Desired level of confidence (e.g., 95%)

The 95% confidence interval is commonly used, indicating you can be 95% confident the true population result will fall within the constructed interval.

To make inferences about a population based on sample data, the results should be interpreted with the computed confidence interval. For example, 64% of respondents indicated a parent/guardian talked with them about the dangers of tobacco, alcohol, or drug use. The computed margin of error is 5 percentage points. Applying the margin of error (i.e., adding and subtracting 5 percentage points) to the 64% yields a range of 59% to 69%. Therefore, you can be 95% confident that somewhere between 59% and 69% of the population would indicate a parent/guardian talked with them about the dangers of tobacco, alcohol, or drug use.
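The three inputs above feed a standard margin-of-error formula for a sample proportion. Here is a minimal sketch using the normal approximation with a finite population correction; the respondent and population counts are invented for illustration.

```python
import math

def margin_of_error(p_hat, n, N, z=1.96):
    """Margin of error for a sample proportion at 95% confidence
    (z = 1.96), with a finite population correction for sampling
    from a population of size N without replacement."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    fpc = math.sqrt((N - n) / (N - 1))       # finite population correction
    return z * se * fpc

# Hypothetical survey: 350 respondents out of 2,000 students,
# with 64% selecting a given response.
me = margin_of_error(0.64, 350, 2000)
print(round(me * 100, 1))  # margin of error in percentage points -> 4.6
```

Adding and subtracting the computed margin of error to the sample percentage gives the interval within which the population result is likely to fall, as in the worked example above.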
In formative evaluation, programs or projects are typically assessed during their development or early implementation to provide information about how best to revise and modify for improvement. This type of evaluation often is helpful for pilot projects and new programs, but can be used for progress monitoring of ongoing programs. In summative evaluation, programs or projects are assessed at the end of an operating cycle, and findings typically are used to help decide whether a program should be adopted, continued, or modified for improvement.
Both evaluation methods are recommended for use, when possible, to provide program staff with ongoing feedback for program modifications (formative) as well as periodic review of long-term progress on major program goals and objectives (summative), and to meet regular reporting requirements (e.g., for a grantor, agency, or organizational manager).
Our data come from different sources, depending on our needs. To monitor some of AISD’s initiatives, we collect our own data through districtwide surveys, such as campus staff and student climate surveys, parent surveys, and surveys or focus groups with program participants. For more information on AISD’s surveys, please go to District and campus surveys.
For standard data elements (e.g., student demographics, student attendance, discipline, and course credits earned), we use official information collected for the Texas Education Agency’s (TEA) Public Education Information Management System (PEIMS). All public schools in Texas submit PEIMS data to TEA multiple times per year, and summary reports of these data are available to the public through Texas Academic Performance Reports (TAPR) on the TEA website.
We also use student data that are collected and stored in the district’s student information system (TEAMS). This includes data elements that are not necessarily reported to the TEA, such as course period attendance and course grades.
Data specific to programs we evaluate may also come from other sources, such as program or grant managers, program-specific data collection systems, and external data sources (e.g., the National Student Clearinghouse’s postsecondary enrollment tracking system).
We often link students’ program participation and other data to results for assessments such as the State of Texas Assessments of Academic Readiness (STAAR), SAT, and ACT. STAAR data are made available to us by TEA and Educational Testing Service (i.e., the company that created many of these tests). SAT and ACT data are made available to us by their respective testing agencies. Other assessments are scored in AISD and the results are made available through the district’s information system.
Statistical significance is a topic that can arise when differences in outcomes (e.g., average scores or proportions) are being compared. For example, is the difference between the average test scores before and after students participated in a program statistically significant? The difference between two average scores can be great yet not statistically significant, or two average scores can differ only slightly yet be statistically significant.
To determine statistical significance, we use a mathematical calculation to find out whether the difference between the outcomes of interest is larger than would be expected by chance alone. A statistically significant finding supports the claim that the result is unlikely to have occurred by chance.
Various factors can influence the determination of statistical significance: sample size (i.e., how many cases were included in the analysis); variance (i.e., the spread of scores within the distribution); skewness (i.e., the degree to which scores might be clustered on one side of a central tendency and trail out); and kurtosis (i.e., the extent to which the distribution of scores departs from a bell-shaped curve).
Significant in this context does not necessarily mean important. When a sample size is large, small differences can be found to be statistically significant. These differences might be too small to be considered important by the interested party.
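The interplay of sample size and significance can be illustrated with a two-proportion z-test, a standard test for comparing two percentages (the pass rates and sample sizes below are invented). The same 2-percentage-point difference is not statistically significant with small samples but is highly significant with very large ones:

```python
import math
from statistics import NormalDist

def two_proportion_z_test(p1, n1, p2, n2):
    """z-test for the difference between two independent proportions.
    Returns the z statistic and the two-sided p-value."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided
    return z, p_value

# Hypothetical pass rates: 72% vs. 70%, a 2-point difference.
_, p_small = two_proportion_z_test(0.72, 100, 0.70, 100)      # 100 students each
_, p_large = two_proportion_z_test(0.72, 20000, 0.70, 20000)  # 20,000 students each
print(p_small > 0.05, p_large < 0.05)  # prints: True True
```

With 100 students per group the difference is well within what chance alone would produce, while with 20,000 per group it is not; yet a 2-point difference may still be too small to matter for a practical decision.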