UPSI Digital Repository (UDRep)

Type: thesis
Subject: QA Mathematics
Main Author: Muhammad Huzaifah Ismail
Title: The development of Automatic Programming Assessment Tool (APAT) that applies learning Taxonomy as its grading model
Place of Production: Tanjong Malim
Publisher: Fakulti Seni, Komputeran dan Industri Kreatif
Year of Publication: 2022
Corporate Name: Universiti Pendidikan Sultan Idris

Abstract:
Currently, it is difficult to grade students' programming assignments effectively. The objective of this work was therefore to create an Automatic Programming Assessment Tool (APAT) with a grading rubric mapped to Bloom's Taxonomy. To ensure that the tool has appropriate quality attributes, APAT was developed according to Software Engineering (SE) principles, namely software specification, software development, and software verification. The evaluation of the tool focused on its usability and effectiveness. Usability was assessed through a heuristic evaluation involving eight lecturers from the Faculty of Art, Computing and Creative Industry, Sultan Idris Education University, with data gathered through the WebUSE instrument. The tool's effectiveness in assessing student learning was examined using Analysis of Variance (ANOVA). The survey results showed that the lecturers gave the proposed prototype a high rating, and the ANOVA test revealed significant differences in students' learning outcomes between groups. Overall, both findings indicate that APAT is highly usable and effective from the standpoints of practicality and assessment, respectively. Teaching professionals can therefore use this assessment tool to enhance the grading of students' programming work.
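
A minimal illustrative sketch (in Python) of how a grading rubric mapped to Bloom's cognitive levels could aggregate automated check results into a mark. The level weights, criteria, and example scores below are assumptions made for illustration only and do not represent APAT's actual grading model.

# Illustrative sketch only (not APAT's actual model): a hypothetical grading
# rubric in which each assessment criterion is mapped to a level of the
# revised Bloom's taxonomy and weighted accordingly.
from __future__ import annotations
from dataclasses import dataclass

# Assumed weights for the six cognitive levels (they sum to 1.0).
BLOOM_WEIGHTS = {
    "remember":   0.10,  # e.g. program compiles / correct syntax
    "understand": 0.15,  # e.g. expected output on the sample input
    "apply":      0.25,  # e.g. passes the functional test cases
    "analyze":    0.20,  # e.g. handles boundary and edge cases
    "evaluate":   0.15,  # e.g. code-quality / static-analysis checks
    "create":     0.15,  # e.g. design work beyond the given template
}

@dataclass
class CriterionResult:
    level: str   # Bloom level this criterion is mapped to
    passed: int  # checks passed for this criterion
    total: int   # total checks for this criterion

def grade(results: list[CriterionResult]) -> float:
    """Combine per-criterion pass rates into a weighted mark out of 100."""
    score = 0.0
    for r in results:
        rate = r.passed / r.total if r.total else 0.0
        score += BLOOM_WEIGHTS[r.level] * rate
    return round(100 * score, 1)

# Example: a submission that compiles and passes most functional tests
# but misses several edge cases.
submission = [
    CriterionResult("remember", 1, 1),
    CriterionResult("understand", 1, 1),
    CriterionResult("apply", 8, 10),
    CriterionResult("analyze", 2, 5),
    CriterionResult("evaluate", 3, 4),
    CriterionResult("create", 1, 2),
]
print(grade(submission))  # a weighted mark out of 100 (about 72 here)

The point of the sketch is only the weighted-aggregation idea: each automated criterion is tied to one cognitive level, and the final mark is the weight-adjusted sum of the per-criterion pass rates.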

References

[Diploma] Guide Book. (2019). Retrieved August 6, 2019, from http://fskik.upsi.edu.my/wp-content/uploads/2019/06/BPDIPLOMA_20192020.pdf 

About Judge0. (n.d.). Retrieved June 30, 2021, from https://github.com/judge0/judge0 

About Moodle. (2018). Retrieved November 11, 2018, from https://docs.moodle.org/35/en/About_Moodle 

Adams, M. D. (2017). Aristotle: A flexible open-source software toolkit for semi-automated marking of programming assignments. 2017 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, PACRIM 2017 - Proceedings, 1–6. https://doi.org/10.1109/PACRIM.2017.8121888 

Agarwal, R., & Venkatesh, V. (2002). Assessing a firm's Web presence: A heuristic evaluation procedure for the measurement of usability. Information Systems Research, 13(2), 168–186. https://doi.org/10.1287/isre.13.2.168.84 

Ahoniemi, T., & Reinikainen, T. (2006). ALOHA - A Grading Tool for Semi-Automatic Assessment of Mass Programming Courses. Proceedings of the 6th Baltic Sea Conference on Computing Education Research (Koli Calling 2006), (February), 139–140. https://doi.org/10.1145/1315803.1315830 

Al-Khanjari, Z. A., Fiaidhi, J. A., Al-Hinai, R. A., & Kutti, N. S. (2010). PlagDetect: A Java Programming Plagiarism Detection Tool. ACM Inroads, 1(4), 66. https://doi.org/10.1145/1869746.1869766 

Alsumait, A., & Al-Osaimi, A. (2009). Usability heuristics evaluation for child e-learning applications. IiWAS2009 - The 11th International Conference on Information Integration and Web-Based Applications and Services, 425–430. https://doi.org/10.1145/1806338.1806417 

Anderson, L. W. (2003). Introduction to classroom assessment. In Classroom Assessment Enhancing the Quality of Teacher Decision Making (p. 199). Lawrence Erlbaum Associates, Inc. 

Andrade, H. (2007). Self-Assessment Through Rubrics. Educational Leadership, 65(4), 60–63. https://doi.org/10.1016/j.neuropharm.2005.02.010 

Bachelor of COMPUTER SCIENCE (SOFTWARE ENGINEERING) (HONS.). (n.d.). Retrieved May 25, 2021, from https://www.uniten.edu.my/programmes/computing-informatics/bachelor-of-computer-science-software-engineering-hons/ 

Batool, A., Motla, Y. H., Hamid, B., Asghar, S., Riaz, M., Mukhtar, M., & Ahmed, M. (2013). Comparative Study of Traditional Requirement Engineering and Agile Requirement Engineering. In International Conference on Advanced Communications Technology (pp. 1006–1014). 

Becker, K. (2003). Grading Programming Assignments using Rubrics, 58113. 

Bevan, N. (1995). Measuring usability as quality of use. Software Quality Journal, 115–130. https://doi.org/10.1007/BF00402715 

Bixler, B. (2007). Psychomotor Domain Taxonomy. Retrieved November 11, 2018, from http://users.rowan.edu/~cone/curriculum/psychomotor.htm 

Black, P., & Wiliam, D. (2010). Inside the black box: Raising standards through classroom assessment. https://doi.org/10.1177/003172171009200119 

Bloom’s Taxonomy of Learning Domains. (2006). Retrieved November 11, 2018, from https://www.nbna.org/files/Blooms Taxonomy of Learning.pdf 

Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives, Handbook 1: Cognitive domain. New York: Longman. 

Bloom, B. S., Krathwohl, D. R., & Masia, B. B. (1964). Taxonomy of Educational Objectives. The classification of educational goals. Handbook 2: Affective Domain. David McKay Company, Inc. 

Buckley, J., & Exton, C. (2003). Bloom’s taxonomy: A framework for assessing programmers’ knowledge of software systems. Proceedings - IEEE Workshop on Program Comprehension, 2003-May, 165–174. https://doi.org/10.1109/WPC.2003.1199200 

Caiza, J. C., & Del Alamo, J. M. (2013). Programming Assignments Automatic Grading: Review of Tools and Implementations. In International Technology, Education and Development Conference. 

Cateté, V., Snider, E., & Barnes, T. (2016). Developing a Rubric for a Creative CS Principles Lab, 290–295. 

Cheang, B., Kurnia, A., Lim, A., & Oon, W. C. (2003). On automated grading of programming assignments in an academic institution. Computers and Education, 41(2), 121–131. https://doi.org/10.1016/S0360-1315(03)00030-7 

Chen, H., Kazman, R., & Haziyev, S. (2016). Strategic Prototyping for Developing Big Data Systems. 

Chiew, T. K., & Salim, S. S. (2003). Webuse: Website usability evaluation tool. Malaysian Journal of Computer Science, 16(1), 47–57. 

Choudhury, P. R., Wats, N., Jaiswal, R., & Goudar, R. H. (2014). Automated Process for Assessment of Learners Programming Assignments. In International Conference on Intelligent Systems and Control: Green Challenges and Smart Solutions (pp. 281–285). https://doi.org/10.1109/ISCO.2014.7103960 

Tang, C. M., & Yu, Y. T. (2013). An Exploratory Study on Instructors' Agreement on the Correctness of Computer Program Outputs. In S. K. S. Cheung, J. Fong, W. Fong, F. L. Wang, & L. F. Kwok (Eds.), Hybrid Learning and Continuing Education (pp. 69–80). Springer. https://doi.org/10.1007/978-3-642-39750-9 

CodeIgniter Overview. (n.d.). Retrieved July 3, 2021, from https://www.codeigniter.com/userguide3/overview/mvc.html 

Cronbach, L. J. (1990). Essentials of Psychological Testing. Harpercollins College Div; Subsequent edition. 

Cullinane, A. (2010). Bloom's Taxonomy and its Use in Classroom Assessment. Resource & Research Guides, 1(10), 2009–2010. 

Daud, N. M. N., Bakar, N. A. A. A., & Rusli, H. M. (2010). Implementing Rapid Application Development (RAD) methodology in developing practical training application system. Proceedings 2010 International Symposium on Information Technology - System Development and Application and Knowledge Society, ITSim’10, 3, 1664–1667. https://doi.org/10.1109/ITSIM.2010.5561634 

DigitalOcean. (n.d.). Retrieved August 22, 2021, from https://www.digitalocean.com/products/droplets/ 

Dixson, D. D., & Worrell, F. C. (2016). Formative and Summative Assessment in the Classroom. Theory into Practice, 55(2), 153–159. https://doi.org/10.1080/00405841.2016.1148989 

Dogan, C. D., & Uluman, M. (2017). A comparison of rubrics and graded category rating scales with various methods regarding raters’ reliability. Kuram ve Uygulamada Egitim Bilimleri, 17(2), 631–651. https://doi.org/10.12738/estp.2017.2.0321 

Dosilovic, H. Z., & Mekterovic, I. (2020). Robust and scalable online code execution system. 2020 43rd International Convention on Information, Communication and Electronic Technology, MIPRO 2020 - Proceedings, 1627–1632. https://doi.org/10.23919/MIPRO48935.2020.9245310 

Douce, C., Livingstone, D., & Orwell, J. (2005). Automatic test-based assessment of programming. Journal on Educational Resources in Computing, 5(3), 4-es. https://doi.org/10.1145/1163405.1163409 

Dowson, M. (1997). The Ariane 5 software failure. ACM SIGSOFT Software Engineering Notes, 22(2), 84. https://doi.org/10.1145/251880.251992 

Etikan, I., Abubakar Musa, S., & Alkassim, S. R. (2017). Comparison of Convenience Sampling and Purposive Sampling. American Journal of Theoretical and Applied Statistics, 5. https://doi.org/10.11648/j.ajtas.20160501.11 

Foong, O.-M., Tran, Q.-T., Yong, S.-P., & Rais, H. M. (2014). Swarm inspired test case generation for online C++ programming assessment. 2014 International Conference on Computer and Information Sciences (ICCOINS), 1–5. https://doi.org/10.1109/ICCOINS.2014.6868842 

Frazer, M. (1992). Quality Assurance in Higher Education (1st ed.). 

Gao, J. Z., Tsao, H.-S. J., & Ye, W. (2003). Testing and Quality Assurance for Component-Based Software. (V. Perrish & L. Nevard, Eds.). Artech House Inc. 

Gerdes, A., Heeren, B., Jeuring, J., & van Binsbergen, L. T. (2017). Ask-Elle: an Adaptable Programming Tutor for Haskell Giving Automated Feedback. International Journal of Artificial Intelligence in Education, 27(1), 65–100. https://doi.org/10.1007/s40593-015-0080-x 

Ghosh, M., Verma, B., & Nguyen, A. (2002). An Automatic Assessment Marking And Plagiarism Detection. Proceedings of the First International Conference on Information Technology and Applications, 489–494. Retrieved from http://www.scopus.com/inward/record.url?eid=2-s2.0-1842580485&partnerID=40&md5=8f16cddf26ea2b1a2b78a6918f2c2c18 

GitHub Classroom. (2018). Retrieved November 11, 2018, from classroom.github.com 

Goševa-Popstojanova, K., & Trivedi, K. S. (2001). Architecture-based approach to reliability assessment of software systems. Performance Evaluation, 45(2–3), 179–204. https://doi.org/10.1016/S0166-5316(01)00034-7 

Guidelines: Malaysian Qualification Statement (MQS). (n.d.). Retrieved June 4, 2021, from https://www2.mqa.gov.my/qad/garispanduan/GGP-Malaysia Qualification Statement.pdf 

Hollingsworth, J. (1960). Automatic graders for programming classes. Communications of the ACM, 3(1), 528–529. https://doi.org/10.1145/367415.367422 

Holzinger, A. (2005). Usability engineering methods for software developers. Communications of the ACM, 48(1), 71–74. https://doi.org/10.1145/1039539.1039541 

Howatt, J. W. (1994). On Criteria for Grading Student Programs. SIGCSE Bulletin, 26(3). 

Hsiao, I.-H. (2016). Mobile Grading Paper-Based Programming Exams: Automatic Semantic Partial Credit Assignment Approach. In G. Gerhard, J. Juris, & J. Van Leeuwen (Eds.), European Conference on Technology Enhanced Learning (pp. 110–223). Springer International Publishing Switzerland. https://doi.org/10.1007/978-3-319-45153-4_9 

Hwang, W., & Salvendy, G. (2010). Number of people required for usability evaluation: The 10±2 rule. Communications of the ACM, 53(5), 130–133. https://doi.org/10.1145/1735223.1735255 

IEEE Guide to Software Requirements Specifications. (1984). 

IEEE Recommended Practice for Software Design Descriptions. (1998). 

Ihantola, P., & Seppälä, O. (2010). Review of Recent Systems for Automatic Assessment of Programming Assignments. In Proceedings of the 10th Koli Calling International Conference on Computing Education Research. https://doi.org/10.1145/1930464.1930480 

ISO 9241-11:2018. (2018). Retrieved June 1, 2021, from https://www.iso.org/standard/63500.html 

Iyyappan, M., & Kumar, A. (2020). Software quality optimization of coupling and cohesion metric for CBSD model. In V. Singh, V. Asarai, S. Kumar, & R. Patel (Eds.), Computational Methods and Data Engineering (pp. 1–19). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-15-7907-3_1 

Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130–144. https://doi.org/10.1016/j.edurev.2007.05.002 

Joy, M., Griffiths, N., & Boyatt, R. (2005). The boss online submission and assessment system. Journal on Educational Resources in Computing, 5(3), 2-es. https://doi.org/10.1145/1163405.1163407 

Reio, T. G., Jr. (2016). Nonexperimental Research: Strengths, Weaknesses and Issues of Precision. European Journal of Training and Development, 40(8/9), 676–690. https://doi.org/10.1108/EJTD-07-2015-0058 

Juarez-Ramirez, R., Jimenez, S., Huertas, C., & Guerra-Garcia, C. (2017). Towards Assessing Attitudes and Values in the Practice of Software Engineering: The Competency-Based Learning Approach. 2017 5th International Conference in Software Engineering Research and Innovation (CONISOFT), 153–162. https://doi.org/10.1109/CONISOFT.2017.00026 

Kenton, W. (n.d.). Descriptive Statistics. Retrieved August 16, 2019, from https://www.investopedia.com/terms/d/descriptive_statistics.asp 

Krathwohl, D. R. (2002). A revision of bloom’s taxonomy: An overview. Theory into Practice, 41(4), 212–218. https://doi.org/10.1207/s15430421tip4104_2 

Krejcie, R. V., & Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological Measurement, 30(3), 607–610. 

Lajis, A., Baharudin, S. A., Kadir, D. A., Ralim, N. M., & Nasir, H. M. (2018). A Review of Techniques in Automatic Programming Assessment for Practical Skill Test. Journal of Telecommunication, Electronic and Computer Engineering, 10(2), 109–113. 

Lavrakas, P. J. (2008). Encyclopedia of Survey Research Methods. In J. Smarr (Ed.) (p. 1041). SAGE Publications, Inc. 

Leal, J. P., & Silva, F. (2010). Using Mooshak as a Competitive Learning Tool. A New Learning Paradigm: Competition Supported by Technology, 91–106. 

Leal, J. P., & Silva, F. (2003). Mooshak: A web-based multi-site programming contest system. Software: Practice and Experience, 567–581. https://doi.org/10.1002/spe.522 

Lee Chuan, C. (2006). Sample size estimation using Krejcie and Morgan and Cohen statistical power analysis: A comparison. Jurnal Penyelidikan IPBL. 

Leff, A., & Rayfield, J. T. (2001). Web-application development using the Model/View/Controller design pattern. Proceedings - 5th IEEE International Enterprise Distributed Object Computing Conference, 118–127. https://doi.org/10.1109/EDOC.2001.950428 

Lichter, H., Schneider-hufschmidt, M., & Zullighoven, H. (1995). Prototyping in Industrial Software Projects-Bridging the Gap Between Theory and Practice, 20(11), 825–832. 

Lipovaca, M. (2018). Introduction to Haskell. Retrieved December 25, 2018, from http://learnyouahaskell.com/introduction#so-whats-haskell 

Martin, J. (1991). Rapid Application Development. Macmillan Publishing Co. 

Masapanta-Carrión, S., & Velázquez-Iturbide, J. Á. (2018). A systematic review of the use of Bloom's Taxonomy in computer science education. Proceedings of the 49th ACM Technical Symposium on Computer Science Education, 441–446. 

Mata-Toledo, R. A., & Cushman, P. K. (2003). Introduction to Computer Science. Tata McGraw-Hill Publishing Company Limited. 

Matera, M., Costabile, M. F., Garzotto, F., & Paolini, P. (2002). SUE inspection: An effective method for systematic usability evaluation of hypermedia. IEEE Transactions on Systems, Man, and Cybernetics Part A:Systems and Humans., 32(1), 93–103. https://doi.org/10.1109/3468.995532 

McGaghie, W. C. (2001). Review Criteria, 922–951. 

McTighe, J., & Arter, J. (2001). Scoring Rubrics in the Classroom: Using Performance Criteria for Assessing and Improving Student Performance (illustrate). Corwin Press. 

Moskal, B. M., & Leydens, J. A. (2000). Scoring rubric development: Validity and reliability. Practical Assessment, Research & Evaluation, 7(10), 1–10. https://doi.org/10.1016/j.asw.2010.01.003 

Murer, S., Gruntz, D., & Szyperski, C. (2002). Component Software: Beyond Object-Oriented Programming. ACM Press. 

Mustapha, A., Samsudin, N. A., Arbaiy, N., Mohamed, R., & Hamid, I. R. (2016). Generic assessment rubrics for computer programming courses. Turkish Online Journal of Educational Technology, 15(1), 53–61. 

MVC Framework - Introduction. (n.d.). Retrieved July 3, 2021, from https://www.tutorialspoint.com/mvc_framework/mvc_framework_introduction.htm 

Nielsen, J., & Landauer, T. K. (1993). A mathematical model of the finding of usability problems. Proceedings of the ACM INTERCHI '93 Conference on Human Factors in Computing Systems, 206–213. Retrieved from http://delivery.acm.org/10.1145/170000/169166/p206-nielsen.pdf 

Nielsen, J. (2000). Why You Only Need to Test with 5 Users. Retrieved May 26, 2021, from https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/ 

Nielsen, J. (2012). Usability 101: Introduction to Usability. Retrieved July 1, 2021, from https://www.nngroup.com/articles/usability-101-introduction-to-usability/ 

Odhabi, H. (2007). Investigating the impact of laptops on students’ learning using Bloom’s learning taxonomy. British Journal of Educational Technology, 38(6), 1126–1131. https://doi.org/10.1111/j.1467-8535.2007.00730.x 

Omair, A. (2014). Sample size estimation and sampling techniques for selecting a representative sample, 2(4), 142–147. https://doi.org/10.4103/1658-600X.142783 

Pandey, D., & Suman, U. (2010). An Effective Requirement Engineering Process Model for Software Development and Requirements Management. In International Conference on Advances in Recent Technologies in Communication and Computing. IEEE. https://doi.org/10.1109/ARTCom.2010.24 

Payne, D. A. (2003). Applied Educational Assessment (2nd ed.). 

Pieterse, V. (2013). Automated Assessment of Programming Assignments. In Proceedings of the 3rd Computer Science Education Research Conference on Computer Science Education Research (pp. 45–56). 

Pieterse, V., & Liebenberg, J. (2017). Automatic vs manual assessment of programming tasks. Proceedings of the 17th Koli Calling Conference on Computing Education Research  - Koli Calling ’17, 193–194. https://doi.org/10.1145/3141880.3141912 

Pillay, N. (2003). Developing Intelligent Programming Tutors for Novice Programmers, 35(2), 78–82. 

Popham, W. J. (1997). What’s wrong—and what’s right—with rubrics. Educational Leadership. Retrieved from http://skidmore.edu/assessment/handbook/Popham_1997_Whats-Wrong_and-Whats-Right_With-Rubrics.pdf 

Pressman, R. S. (2010). Software Engineering: A Practitioner's Approach (7th ed.). McGraw Hill. 

Price, P. C., Jhangiani, R., & Chiang, I.-C. A. (2015). Overview of nonexperimental research. Retrieved from https://ecampusontario.pressbooks.pub/researchmethods/chapter/overview-of-nonexperimental-research/ 

Prieto-Diaz, R., & Freeman, P. (1987). Classifying Software for Reusability. IEEE Software, 6–16. https://doi.org/10.1109/MS.1987.229789 

Quantitative Data Analysis. (2018). Retrieved August 12, 2019, from https://research-methodology.net/research-methods/data-analysis/quantitative-data-analysis/ 

Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment and Evaluation in Higher Education, 35(4), 435–448. https://doi.org/10.1080/02602930902862859 

O'Regan, G. (2019). Concise Guide to Software Testing. Springer. https://doi.org/10.1007/978-3-030-28494-7 

Robinson, P. E., & Carroll, J. (2017). An online learning platform for teaching, learning, and assessment of programming. IEEE Global Engineering Education Conference, EDUCON, (April), 547–556. https://doi.org/10.1109/EDUCON.2017.7942900 

Romli, R., Abdurahim, E. A., Mahmod, M., & Omar, M. (2016). Current Practices of Dynamic-Structural Testing in Programming Assessments. Journal of Telecommunication, Electronic and Computer Engineering, 8(2), 153–159. 

Romli, R., Sulaiman, S., & Zamli, K. Z. (2013). Designing a Test Set for Structural Testing in Automatic Programming Assessment, 5(3). 

Romli, R., Sulaiman, S., & Zamli, K. Z. (2015a). Improving Automated Programming Assessments: User Experience Evaluation Using FaSt-generator. Procedia Computer Science, 72, 186–193. https://doi.org/10.1016/j.procs.2015.12.120 

Romli, R., Sulaiman, S., & Zamli, K. Z. (2015b). Improving the Reliability And Validity Of Test Data Adequacy In Programming Assessments. Jurnal Teknologi (Sciences & Engineering). 

Rubio-Sánchez, M., Kinnunen, P., Pareja-Flores, C., & Velázquez-Iturbide, Á. (2014). Student perception and usage of an automated programming assessment tool. Computers in Human Behavior, 31, 453–460. https://doi.org/10.1016/j.chb.2013.04.001 

Salant, P., & Dillman, D. A. (1994). How to conduct your own survey. New York: John Wiley & Sons, Inc. 

Salkind, N. J. (2010). Encyclopedia of Research Design (Volume 1). 

Schlarb, M., Hundt, C., & Schmidt, B. (2015). SAUCE: A Web-Based Automated Assessment Tool for Teaching Parallel Programming. In G. Goos, J. Hartmanis, & J. Van Leeuwen (Eds.), European Conference on Parallel Processing (pp. 54–65). Springer International Publishing Switzerland. https://doi.org/10.1007/978-3-319-27308-2_5 

Shermis, M. D., & Di Vesta, F. J. (2011). Classroom Assessment in Action (p. 559). Rowman & Littlefield Publishers, Inc. 

Simpson, E. J. (1972). The Classification of Educational Objectives in the Psychomotor Domain. Education, 3(3), 43–56. Retrieved from http://eric.ed.gov/ERICWebPortal/recordDetail?accno=ED010368 

Singh, A., Karayev, S., Gutowski, K., & Abbeel, P. (2017). Gradescope: A Fast, Flexible, and Fair System for Scalable Assessment of Handwritten Work. Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale - L@S ’17, 81–88. https://doi.org/10.1145/3051457.3051466 

Software Quality Attributes. (n.d.). Retrieved January 7, 2021, from https://asq.org/quality-resources/software-quality 

Solms, F., & Pieterse, V. (2016). Towards a Generic DSL for Automated Marking Systems. In Annual Conference of the Southern African Computer Lecturers’ Association (p. 642). Springer Verlag. https://doi.org/10.1007/978-3-319-47680-3_6 

Sommerville, I. (2011). Software Engineering. (M. Horton, M. Hirsch, M. Goldstein, C. Bell, & J. Holcomb, Eds.). Pearson. 

Souza, D. M., Felizardo, K. R., & Barbosa, E. F. (2016a). A Systematic Literature Review of Assessment Tools for Programming Assignments. 2016 IEEE 29th International Conference on Software Engineering Education and Training (CSEET), 147–156. https://doi.org/10.1109/CSEET.2016.48 

Souza, D. M., Felizardo, K. R., & Barbosa, E. F. (2016b). A Systematic Literature Review of Assessment Tools For Programming Assignments. In Proceedings - 2016 IEEE 29th Conference on Software Engineering Education and Training (pp. 147–156). https://doi.org/10.1109/CSEET.2016.48 

Srikant, S., & Aggarwal, V. (2014). A system to grade computer programming skills using machine learning. Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’14, 1887–1896. https://doi.org/10.1145/2623330.2623377 

Steigerwald, L. R. (1992). Rapid Software Prototyping. 

Stevens, D. D., & Levi, A. J. (2005). Introductions to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback, and Promote Student Learning. Stylus Publishing, LLC. 

Stout, Q. F. (2000). What is Parallel Computing? A Not Too Serious Explanation. Retrieved December 25, 2018, from https://web.eecs.umich.edu/~qstout/parallel.html 

Summative and Formative Assessment. (2018). Retrieved July 21, 2018, from https://citl.indiana.edu/teaching-resources/assessing-student-learning/summative-formative/ 

Tang, T., Smith, R., Rixner, S., & Warren, J. (2016). Data-Driven Test Case Generation for Automated Programming Assessment. In Annual Conference on Innovation and Technology in Computer Science Education (pp. 260–265). https://doi.org/10.1145/2899415.2899423 

Taras, M. (2005). Assessment - Summative and formative - Some theoretical reflections. British Journal of Educational Studies, 53(4), 466–478. https://doi.org/10.1111/j.1467-8527.2005.00307.x 

Tullis, T., & Albert, W. (2013). Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics. (M. Dunkerley & H. Scherer, Eds.) (2nd ed.). Morgan Kaufmann. 

Ullah, Z., Lajis, A., Jamjoom, M., Altalhi, A. H., Shah, J., & Saleem, F. (2019). A rule-based method for cognitive competency assessment in computer programming using bloom’s taxonomy. IEEE Access, 7, 64663–64675. https://doi.org/10.1109/ACCESS.2019.2916979 

Vale, T., Crnkovic, I., De Almeida, E. S., Silveira Neto, P. A. D. M., Cavalcanti, Y. C., & Meira, S. R. D. L. (2016). Twenty-eight years of component-based software engineering. Journal of Systems and Software, 111, 128–148. https://doi.org/10.1016/j.jss.2015.09.019 

Virzi, R. A. (1992). Refining the test phase of usability evaluation: How many subjects is enough? Human Factors, 34(4), 457–468. https://doi.org/10.1177/001872089203400407 

Wiggins, G. P. (1998). Educative assessment: Designing assessments to inform and improve student performance. 

Wolf, K., & Stevens, E. (2007). The role of rubrics in advancing and assessing student learning. The Journal of Effective Teaching, 7(1), 3–14. Retrieved from http://works.bepress.com/cgi/viewcontent.cgi?article=1058&context=susan_madsen#page=8 

Lee, Y.-J., Kim, M., Jin, Q., Yoon, H.-G., & Matsubara, K. (2017). East-Asian Primary Science Curricula: An Overview Using Revised Bloom's Taxonomy. SpringerBriefs in Education. https://doi.org/10.1007/978-981-10-2690-4 

Yu, Y. T., Poon, C. K., & Choy, M. (2006). Experiences with PASS: Developing and using a programming assignment assessment system. 

Zougari, S., Tanana, M., & Lyhyaoui, A. (2016). Towards an automatic assessment system in introductory programming courses. Proceedings of 2016 International Conference on Electrical and Information Technologies, ICEIT 2016, 496–499. https://doi.org/10.1109/EITech.2016.7519649 

Zougari, S., Tanana, M., & Lyhyaoui, A. (2017). Hybrid assessment method for programming assignments. Colloquium in Information Science and Technology, CIST, 564–569. https://doi.org/10.1109/CIST.2016.7805112 

 


This material may be protected under the Copyright Act, which governs the making of photocopies or reproductions of copyrighted materials.
You may use the digitized material for private study, scholarship, or research.

