% loncom/html/adm/help/tex/Custom_Response_Problem_Creation.tex, revision 1.7

\label{Custom_Response_Problem_Creation}\index{Custom Response}
Custom Response is a way to have a problem graded by an algorithm. The use of this response type is
generally discouraged, since the responses will not be analyzable by the LON-CAPA statistics tools.

For a single textfield, the student's answer will be in the variable \$submission. If the Custom Response has multiple textfields, the answers will be in an array
reference, and can be accessed as \$\$submission[0], \$\$submission[1], etc.
The student answer needs to be evaluated by Perl code inside the \texttt{$<$answer$>$} tag. The Custom Response needs to include an algorithm that determines and returns a standard LON-CAPA response. The most common LON-CAPA responses are:
\begin{itemize}
\item EXACT\_ANS: return if solved exactly correctly
\item APPROX\_ANS: return if solved approximately correctly
\item INCORRECT: return if not correct; uses up a try
\item ASSIGNED\_SCORE: partial credit (also return the credit factor, \\
e.g. return(ASSIGNED\_SCORE,0.3);)
\item SIG\_FAIL, NO\_UNIT, EXTRA\_ANSWER, MISSING\_ANSWER, BAD\_FORMULA,
WANTED\_NUMERIC, WRONG\_FORMAT: return if not correct for different reasons; does not use up a try
\end{itemize}
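For ASSIGNED\_SCORE, the award is returned together with the credit factor. A minimal sketch of such an answer block (the values and the partial-credit rule are made up for illustration):
\begin{verbatim}
<answer type="loncapa/perl">
# Full credit for 42, 30% credit for a sign error, otherwise wrong
if ($submission==42)  { return 'EXACT_ANS'; }
if ($submission==-42) { return(ASSIGNED_SCORE,0.3); }
return 'INCORRECT';</answer>
\end{verbatim}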
The \texttt{answerdisplay} attribute is shown instead of the student response in `show answer' mode after the answer date.
The following example illustrates this:
\begin{verbatim}
<problem>
<startouttext />Accept an answer of around 90 or -90<endouttext />
  <customresponse answerdisplay="something near 90 or -90">
    <answer type="loncapa/perl">
# This example uses Perl regular expressions for string evaluation.
# Consult a Perl reference for help understanding the regular expressions.
# We do not want a vector
if ($submission=~/\,/) { return 'EXTRA_ANSWER'; }
# Need a numerical answer here (digits, sign, decimal point, or e-notation)
if ($submission!~/^[\d.\-e]+$/i) { return 'WANTED_NUMERIC'; }
$difference=abs(90-abs($submission));
if ($difference==0) { return 'EXACT_ANS'; }
if ($difference < 0.1) { return 'APPROX_ANS'; }
return 'INCORRECT';</answer>
    <textline readonly="no" />
  </customresponse>
</problem>
\end{verbatim}

Full list of possible return codes:
\begin{itemize}
\item EXACT\_ANS: student is exactly correct
\item APPROX\_ANS: student is approximately correct
\item NO\_RESPONSE: student submitted no response
\item MISSING\_ANSWER: student submitted some but not all parts of a response
\item EXTRA\_ANSWER: student submitted a vector of values when a scalar was expected
\item WANTED\_NUMERIC: expected a numeric answer and did not get one
\item SIG\_FAIL: incorrect number of significant figures
\item UNIT\_FAIL: incorrect unit
\item UNIT\_NOTNEEDED: student submitted a unit when none was expected
\item UNIT\_INVALID\_INSTRUCTOR: the unit provided by the author of the problem is unparsable
\item UNIT\_INVALID\_STUDENT: the unit provided by the student is unparsable
\item UNIT\_IRRECONCIBLE: the student's unit and the instructor's unit are of different types
\item NO\_UNIT: a unit was needed but none was submitted
\item BAD\_FORMULA: syntax error in the submitted formula
\item WRONG\_FORMAT: the student submission did not have the expected format
\item INCORRECT: the answer was wrong
\item SUBMITTED: the submission was not graded
\item DRAFT: the submission was only stored
\item MISORDERED\_RANK: student submitted a misordered rank response
\item ERROR: unable to determine a grade
\item ASSIGNED\_SCORE: partial credit; the customresponse needs to return the award followed by the partial credit factor
\item TOO\_LONG: the answer submission was deemed too long
\item INVALID\_FILETYPE: student tried to upload a file with an extension that was not specifically allowed
\item EXCESS\_FILESIZE: student uploaded file(s) with a combined size that exceeded the amount allowed
\item COMMA\_FAIL: the answer requires comma grouping and it was missing or incorrect
\end{itemize}
