Design & Build Custom Assessments
Create, share, review, edit, and innovate with Editor and Content Builder, the Kite Suite's advanced item banking and content authoring tools. Our solution follows Accessible Portable Item Protocol® (APIP®) and Question and Test Interoperability® (QTI®) standards and supports cross-platform collaboration. Choose item types such as simulations, technology-enhanced items, selected response, and more.
Create Items, Media, and Assessments Using Our Content Builder Authoring Tool
A powerful QTI-certified application for item, media, and test creation that allows for item banking, accessibility tagging, and test publishing.
A companion to Content Builder, Editor is used to build technology-enhanced item types such as labeling, select text, simulations, and much more.
Choose from Standard Item Types or Our Innovative Task Packages
The Kite system supports simulations that model situations in science, engineering, biomedical science, and computer science. Simulations allow students to enter values, run the simulation, and then answer the accompanying questions, which can use any of the item types available in Kite Content Builder.
Leverage the technology of online assessments to better assess content. These items are machine-scorable but often mimic item types that previously required hand-scoring. Constructed response items ask test takers to generate answers to a prompt. For tasks such as labeling, sorting, creating a Venn or Euler diagram, and ordering, there may be a finite set of possibilities (such as a word bank or other provided text or images), but the test taker must still perform a generative operation to respond. Other tasks, such as partitioning, graphing and plotting, or Punnett squares, may have a vastly larger set of possible responses.
• Labeling. Move blocks of text onto an image or into a table. The item may ask students to identify parts of a diagram or to map or indicate congruent sides or angles.
• Ordering and Sorting. Arrange images, text, numbers, etc., into a sequence or categories. Ordering items may ask students to correctly represent a series or cycle of events, steps in a process, events in a story, or the order of numbers or expressions. Sorting items ask students to categorize objects by a feature or characteristic.
• Text Entry (Fill in the Blank). Enter a response that is matched to an expected result. Alternative correct answers may be defined and scored as either correct or partially correct. Expected results can be direct text comparisons (e.g., “16.2” or “Ohm’s Law”) or can follow programmable rules (e.g., “multiples of 3 that are also less than 20”).
• Diagram. Sort items into appropriate sectors of a Venn or Euler diagram.
• Punnett Square. Construct and place alleles and genotypes in the appropriate locations on a blank Punnett square.
• Graph. Build a graph with provided data. An item may ask students to create a straight line given certain parameters (such as plotting two points or a point and a slope) or to plot points on a line graph or coordinate grid.
• Partition. Divide an object into sections, then select parts to model equivalent fractions.
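The text-entry matching described above can be sketched in a few lines. This is an illustrative sketch only, not Kite's actual API: the function names and the rule representation are assumptions. An expected result is modeled either as a direct text comparison or as a programmable rule.

```python
# Illustrative sketch of text-entry scoring (not Kite's actual API).
# An expected result is either a direct string comparison or a
# programmable rule, matching the examples in the list above.

def make_direct_rule(expected):
    """Direct text comparison, e.g. '16.2' or "Ohm's Law"."""
    return lambda response: response.strip().lower() == expected.strip().lower()

def multiples_of_3_under_20(response):
    """Programmable rule: multiples of 3 that are also less than 20."""
    try:
        value = int(response)
    except ValueError:
        return False
    return value % 3 == 0 and 0 < value < 20

def score(response, rules):
    """Award 1 point if any acceptance rule matches the response."""
    return 1 if any(rule(response) for rule in rules) else 0
```

Defining alternative correct answers then amounts to supplying several rules for the same blank, any one of which earns the point.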
Kite Suite leverages the power of online assessments to validly measure content using technology-enhanced items. Item types include the following:
• Matrix Items. Answer choices are presented in a tabular format, with a row of buttons for each answer choice. For example, a math item might have three columns with >, <, and = as headings and several expressions of the form 5/3 __ 8/6. The test taker selects the button in the column that corresponds to the equality or inequality statement that makes the expression true.
• Drop Down. Select a word, phrase, number, symbol, or expression to complete a statement or expression, substitute a more appropriate word, or correct a spelling error.
• Matching Lines. Connect ideas, themes, statements, numbers, expressions, solutions, etc., with supporting evidence, definitions, equivalent expressions, etc.
• Select Text. Choose words, lines, or complete sentences from a text that support a claim, provide evidence, represent a particular literary technique, or provide extraneous detail in a mathematics problem. For example, test takers read an informational paragraph and then must identify three details in the paragraph that support the author’s thesis.
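The matrix-item example above has a single keyed column per row, so machine scoring reduces to evaluating the relation. This short sketch (illustrative only, not Kite code) checks the 5/3 and 8/6 example using exact rational arithmetic:

```python
from fractions import Fraction

# Illustrative check of the matrix-item example above: for each row,
# the keyed response is the column whose relation actually holds.

def correct_column(left, right):
    """Return '>', '<', or '=' for a pair of values."""
    if left > right:
        return '>'
    if left < right:
        return '<'
    return '='

# 5/3 = 10/6, which is greater than 8/6, so '>' is the keyed column.
key = correct_column(Fraction(5, 3), Fraction(8, 6))
```

Using `Fraction` avoids floating-point rounding when comparing expressions such as 5/3 and 8/6.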
Choose from provided answer choices. Although the possible set of answer choices is tightly constrained, some item types allow more latitude in responding than a traditional multiple-choice item. Selected response item types include the following:
• Multiple Choice (MC). A traditional question with a fixed number of options (usually four or five) and a single correct answer.
• Multi-Select Multiple Choice. Choose more than one correct answer from a list of possible answer choices. Various scoring options are available including “correct only” and “partial credit.”
• Two-Part. Two related questions are presented, with the second asking for additional information or support for the response to the first. For example, part 1 of a chemistry lab simulation may ask the test taker to identify whether a change to a substance is a chemical or physical change. Part 2 then asks the test taker to identify the observation from the lab that supports the response to part 1.
• Situational Judgment Task (SJT). An item commonly used to measure transportable skills, such as conflict resolution or leadership, or any other instance that requires indirect measurement of behavior. An SJT item presents a scenario or hypothetical situation; the answer choices reflect degrees of correctness, insight, or sophistication or may represent different approaches to problem-solving or a different degree of needed intervention. Answer choices are scaled based on expert judgment of the appropriateness of the response.
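The "correct only" and "partial credit" scoring options mentioned for multi-select items can be sketched as follows. This is a hypothetical illustration; the function names and the particular partial-credit formula (fraction of keys selected, with a penalty for each distractor chosen) are assumptions, not Kite's documented scoring model.

```python
# Hypothetical multi-select scoring sketch (not Kite's actual model).

def score_correct_only(selected, key):
    """'Correct only': full credit for an exact match, else zero."""
    return 1.0 if set(selected) == set(key) else 0.0

def score_partial_credit(selected, key):
    """'Partial credit': credit per keyed option selected, with a
    penalty for each distractor selected, floored at zero."""
    selected, key = set(selected), set(key)
    hits = len(selected & key)      # keyed options chosen
    misses = len(selected - key)    # distractors chosen
    return max(0.0, (hits - misses) / len(key))
```

Under this sketch, selecting two of three keyed options with no distractors earns 2/3 of a point under partial credit but nothing under "correct only."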
Kite Content Builder supports a variety of performance tasks, many of which can be delivered and scored in a standardized manner, with classroom teachers scoring responses using supplied rubrics or scoring guides.
• Extended Constructed Response (Essay Items). Read text and take notes on one or more resources (e.g., text or graphics) in Kite Student Portal, then use the notes to create an essay or short answer related to one or more prompts.
• Speech Capture. View or listen to a set of stimulus resources and then speak into a microphone to answer a set of question prompts. These responses are recorded and can be scored later by a human scorer.
• Dependent Items. Enter short responses and show work in a series of multiple, dependent questions that assess the test taker’s reasoning.
• Activity Evidence. Complete assignments and upload them to the Kite system. A scorer then evaluates the materials according to a rubric, or a test administrator scores them. The Kite system supports the use of both holistic and analytic rubrics for evaluation.
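The holistic and analytic rubrics mentioned above differ in how a final score is produced, which this small sketch illustrates. It is not Kite's data model; the names and the 0-to-4 scale are assumptions for illustration.

```python
# Illustrative sketch of the two rubric styles (not Kite's data model).

def holistic_score(overall):
    """Holistic rubric: one judgment of the whole response, e.g. 0-4."""
    return overall

def analytic_score(criterion_scores, max_per_criterion=4):
    """Analytic rubric: a score per criterion, summed to a total.
    criterion_scores maps criterion name -> points, e.g.
    {'organization': 3, 'evidence': 2, 'conventions': 4}."""
    for name, points in criterion_scores.items():
        if not 0 <= points <= max_per_criterion:
            raise ValueError(f"{name}: score out of range")
    return sum(criterion_scores.values())
```

An analytic rubric yields diagnostic detail per criterion; a holistic rubric is faster to apply but reports only the total.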
The Kite Suite stores the statistical data of your items so you can maximize the utility of every item as a psychometric measurement instrument. Use technology-enhanced, innovative item functionality and strategic scoring logic to expand the parameters, breadth, and depth of measurable content knowledge.
Metadata & Framework Alignment
Manual & Machine Scoring
Item developers can apply unique setup configurations with user-friendly controls for correct-response and machine-score modeling rules. Attach statistics to each item to support test form equating, adaptive testing, and analytics. Assign scorers, monitor scoring progress, and input scores for assessments or items that require human or external scoring.
Internal & External Content Review
Validate your items with voting, commenting, and standards-alignment recommendations directly within the system. The Kite Suite allows you to collect and evaluate valuable feedback in one location to help manage your test content.