Submission Guidelines

Prediction:
You have to submit a single CSV file. The first column should contain the <candidate id> of each candidate as provided in the validation set. In the second column, provide the prediction for each candidate according to the "Conservatives' Predictor", and in the third column according to the "Liberals' Predictor". Mark each candidate as BP, MP or LP in each of the two columns.
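
For illustration, here is a minimal Python sketch of the expected file layout. The candidate IDs and labels shown are hypothetical, and since the guidelines do not mention a header row, none is written.

    import csv

    # Hypothetical example rows: (<candidate id>, Conservatives' label, Liberals' label).
    # The IDs and labels below are made up for illustration only.
    predictions = [
        ("12345", "MP", "LP"),
        ("12346", "BP", "BP"),
        ("12347", "LP", "MP"),
    ]

    with open("submission.csv", "w", newline="") as f:
        writer = csv.writer(f)
        # No header row is written, as the guidelines do not ask for one.
        for candidate_id, conservative_label, liberal_label in predictions:
            writer.writerow([candidate_id, conservative_label, liberal_label])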

Interpretability:
In a document of fewer than 5 A4 pages (Font: Times New Roman, Size: 10, Single line spacing), tell us what insights your predictor can give about the problem at hand, and how you extracted these insights from the predictor.

Methodology:
We would like to know how you approached the problem and what kind of learning techniques you applied. Describe your ideas in a document of fewer than 3 A4 pages (Font: Times New Roman, Size: 10, Single line spacing).

A 2-week window shall open for you to upload your files to our servers. The window shall close on the submission deadline. You shall receive an email as soon as the window opens.

Policy

Please do not share any confidential or proprietary data with us. All entries to the competition become the property of Aspiring Minds. Aspiring Minds may or may not use them for its internal research. Aspiring Minds reserves the right to publicly share these entries for any purpose. The details of the participants shall be kept completely confidential.

Grading

Experts at Aspiring Minds and Dr. Una-May O'Reilly (EvoDesignOpt, CSAIL, MIT) shall judge the final entries.

The grading shall have two parts.

a. Prediction accuracy (70% weight): We will score your predictions against the actual data for the validation set. The better the prediction accuracy, the better the filter.

Scoring for Conservatives' Predictor:

True \ Predicted    LP    MP    BP
LP                   0    -2    -4
MP                  -1     0    -1
BP                  -2    -1     0

Scoring for Liberals' Predictor:

True \ Predicted    LP    MP    BP
LP                   0    -1    -2
MP                  -1     0    -1
BP                  -4    -2     0

For example, for the Conservatives' Predictor, if an LP is predicted as a BP, you are penalized -4, but if a BP is predicted as an LP, you are penalized less (-2).
The sum of the two scores shall be the total score for this section.
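
For clarity, here is a minimal Python sketch of how such a score could be computed from the penalty tables above; the label lists are hypothetical and this is not the official scoring script.

    # Penalty matrices from the tables above, indexed as PENALTY[true_label][predicted_label].
    CONSERVATIVE_PENALTY = {
        "LP": {"LP": 0, "MP": -2, "BP": -4},
        "MP": {"LP": -1, "MP": 0, "BP": -1},
        "BP": {"LP": -2, "MP": -1, "BP": 0},
    }
    LIBERAL_PENALTY = {
        "LP": {"LP": 0, "MP": -1, "BP": -2},
        "MP": {"LP": -1, "MP": 0, "BP": -1},
        "BP": {"LP": -4, "MP": -2, "BP": 0},
    }

    def score(true_labels, predicted_labels, penalty):
        """Sum the penalties over all candidates for one predictor."""
        return sum(penalty[t][p] for t, p in zip(true_labels, predicted_labels))

    # Hypothetical labels for three candidates.
    true_labels        = ["LP", "MP", "BP"]
    conservative_preds = ["BP", "MP", "LP"]
    liberal_preds      = ["LP", "MP", "MP"]

    total = (score(true_labels, conservative_preds, CONSERVATIVE_PENALTY)
             + score(true_labels, liberal_preds, LIBERAL_PENALTY))
    print(total)  # (-4 + 0 + -2) + (0 + 0 + -2) = -8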

b. Interpretability (15% weight): This shall be evaluated according to the insight your model provides about which parameters really make someone perform well vs. not, and how*. For instance, if your model is human-comprehensible, can be written compactly, and you can explain its meaning to a lay non-computer scientist, you got it right! Alternatively, if your model provides some directional information about the parameters, such as which are important, which are not, or in what way a particular parameter influences the decision, we will be happy to read about it.

c. Methodology (15% weight): Here we would like to understand why you decided to take a particular approach to the problem at hand, how you implemented it, any challenges you faced and how you solved them. The grading shall be based on the clarity and correctness of the approach. Extra marks shall be given for creativity!
Good luck!

*We understand that correlation is not causation, yet we believe models can provide insights that can later be tested for causality.