Article

Automatic Recommendation of Forum Threads and Reinforcement Activities in a Data Structure and Programming Course

by Laura Plaza *,†, Lourdes Araujo, Fernando López-Ostenero and Juan Martínez-Romo
Department of Information Languages and Systems, Universidad Nacional de Educación a Distancia, 28040 Madrid, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Syst. Innov. 2023, 6(5), 83; https://doi.org/10.3390/asi6050083
Submission received: 30 July 2023 / Revised: 15 September 2023 / Accepted: 19 September 2023 / Published: 21 September 2023

Abstract

Online learning is quickly becoming a popular alternative to traditional education. One of its key advantages lies in the flexibility it offers, allowing individuals to tailor their learning experiences to their unique schedules and commitments. Moreover, online learning enhances accessibility to education, breaking down geographical and economic barriers. In this study, we propose the use of advanced natural language processing techniques to design and implement a recommender that supports e-learning students by tailoring materials and reinforcement activities to students’ needs. When a student posts a query in the course forum, our recommender system provides links to other discussion threads where related questions have been raised, together with additional activities to reinforce the study of topics that have proven challenging. We have developed a content-based recommender that extracts, with high precision, the key phrases, terms, and embeddings that describe the concepts in the student query and those present in other conversations and reinforcement activities. The recommender compares the concepts extracted from the query with those covered in the course discussion forum and the exercise database to recommend the most relevant content for the student. Our results indicate that we can recommend both posts and activities with high precision (above 80%) using key phrases to represent the textual content. The primary contributions of this research are threefold. First, it centers on a remarkably specialized and novel domain; second, it introduces an effective recommendation approach guided exclusively by the student’s query; third, the recommendations not only provide answers to immediate questions, but also encourage further learning through the recommendation of supplementary activities.

1. Introduction

Online learning is rapidly becoming a popular alternative to traditional education. It represents a new paradigm for distance learning that is revolutionizing education at all levels. While online learning was already important in higher education, the COVID-19 pandemic has resulted in a surge in its adoption, with even primary students attending online lessons [1]. Investment in global education technologies reached USD 18.66 billion in 2019, and the overall market for online education is projected to reach USD 319.167 billion by 2025 [2]. As children become more accustomed to using technology at a younger age, it is expected that higher education will largely take place online in the coming decades. Therefore, institutions must adapt their teaching methods to meet the needs of digital students.
Online learning offers numerous advantages. According to [3], e-learning allows for asynchronous learning, giving each learner the freedom to study at their own pace and speed. This not only increases satisfaction, but also reduces stress. Additionally, it allows studying from anywhere, with resources always available and accessible at any time. Online learning can also help achieve equity and inclusion in education as students can attend international institutions without having to move abroad, significantly reducing living expenses. For those with physical disabilities, online learning avoids the need to travel to the physical location of the classes [4]. Furthermore, some authors argue that online learning is more environmentally friendly and respectful of the planet. Pedagogically, research suggests that online learning increases information retention [5]. It also engages students more actively in learning, promoting motivation and deep learning [6,7], and allowing for personalized learning by providing individualized and differentiated instruction.
However, online students often face many challenges [8]. Firstly, technology can be expensive and requires maintenance and updates, which may limit access for students from disadvantaged socio-economic backgrounds. Additionally, studies suggest that students need to spend more time completing a course in an online classroom compared to a physical one as they cannot directly contact instructors when issues arise. Addressing the need for student–teacher interaction, which is known to improve learning outcomes, is a significant challenge in asynchronous online learning environments. Secondly, it can be difficult for teachers to identify the specific needs of their students in e-learning environments [1] due to lower levels of interaction compared to traditional settings. In this scenario, the discussion forum becomes the primary tool for encouraging dialogue and interactions. Students often use forums to pose questions to instructors and peers, and to initiate debates or discussions on topics of interest. Thirdly, efficiently and fairly assessing learning in digital classrooms remains an open issue.
E-learning systems offer many opportunities for personalization, allowing for a level of adaptation in teaching that is difficult to achieve in traditional classrooms. As a result, many works have developed different strategies and tools to enable the personalization of learning experiences to meet the individual needs of different learners. In recent years, educational recommendation systems have been proposed as an effective way to facilitate personalized learning. They seek to provide accurate information to students based on their preferences, user profile, and learning objectives [9]. In addition to recommendation systems, forums are important tools in online learning environments. They offer students the opportunity to learn from each other and think through concepts on their own, with guidance from the instructor. Furthermore, since online students often feel alone and isolated in their studies, forums can help prevent this sense of isolation. Students can provide feedback and suggestions on their peers’ work, enabling others to benefit from their experience and knowledge [10].
In this work, we propose to integrate recommendation technologies and discussion forums to personalize learning in online courses. A recent study by Irish et al. [11] showcased the positive impact of post recommendations on online graduate courses, leading to enhanced student engagement and performance. This promising outcome has motivated our interest in further exploring effective recommendation strategies for e-learning forums. Specifically, we suggest the use of a recommender that, when a student posts a query in a discussion forum, suggests (i) other discussion threads where similar or related questions have been raised; and (ii) reinforcement activities that can enhance understanding of the most relevant concepts addressed in the question. We hypothesize that recommending reinforcement materials to expand and strengthen concepts that are posing difficulties will greatly assist students and result in improved learning outcomes, while also reducing the workload of the teaching team by decreasing the need to answer frequently repeated questions in the forum. We evaluate our recommender on an online course about data structures and algorithms, achieving an approximately 80% precision rate in recommending both similar discussion threads and related activities.
In summary, the research presented in this paper introduces several noteworthy novelties:
  • Domain-specific application: The study delves into a highly specific and challenging domain: the recommendation of conversations in forums focused on data structures and algorithms. In this domain, students pose questions seeking precise and constrained solutions to intricate problems.
  • Query-driven recommendation: A distinctive feature is the recommendation solely based on the student’s query. This enables the recommender to function effectively without needing prior information about the student’s performance or interests.
  • Holistic recommendations: The recommender yields promising results both in the recommendation of conversations and activities from exams. This not only provides answers to the questions posed by the students, but also offers them supplementary learning activities.
  • Technical innovation: The study introduces technical innovation through the application of transformers to process texts related to data structures and algorithmic concepts. Additionally, it adapts the key phrase definition presented in [12] to our specific domain, marking a novel approach.

2. Related Work

Recommender systems are used in a wide variety of applications, such as recommending books, music, movies, or news. In general, a recommender system is software designed to recommend items to the user based on her preferences and needs [13].
There are two broad types of recommender systems [14]: collaborative filtering and content-based. Collaborative recommenders use the opinions and choices of users with similar profiles, along with information about the user’s past behavior, to predict the items the user will be interested in. The main disadvantage of this approach is the so-called cold-start problem, which emerges when new items are added to the catalog and have few or no interactions. In contrast, content-based recommenders use a list of attributes or features that describe the items in the catalog to recommend items similar to those the user liked or chose in the past. This approach has the advantage of user independence, since it does not need a network of users and their ratings to infer recommendations, but it suffers from over-specialization and provides a limited degree of novelty. A third category of recommenders follows a hybrid approach, which combines methods from both collaborative and content-based systems.
Recommenders are usually employed in e-learning environments to support personalized learning by recommending resources and learning activities according to the student’s preferences, motivation, and needs. They may help access the huge amount of learning resources available in the different repositories, so that only relevant information is presented to the student. A considerable number of recommender systems have been deployed in the last decade to serve in e-learning settings. Although the design principles are similar to those of other recommenders (e.g., music recommenders), the information retrieval goals are often different. The most popular application of recommenders in education aims to help students in the selection of their courses [15,16,17,18,19]. Another important category of recommenders proposes reinforcement activities to students who experience any type of difficulty, or recommends extension materials for further study [20,21,22]. Finally, other works attempt to suggest materials to instructors or course designers for improving the course [23,24].
In recent times, the utilization of recommendation systems in online forums has garnered considerable interest. Singh et al. [25] propose the recommendation of threads similar to a given one, rather than relying solely on the user’s query. To achieve this, they conceptualize threads as a collection of posts and establish similarity using diverse strategies grounded in basic lexical matching. While this approach bears merit, it does not fully address our particular scenario, wherein the challenge is to offer assistance to students before they receive a response from either the instructor or fellow students.
Duan and Zhai [26] leverage the thread’s structure to smooth the language model of a post, utilizing the context of the thread containing it, addressing issues such as posts containing incomplete information due to the assumption that prior posts have already been read.
Papadimitriou et al. [27] present an approach to identify forum posts that are relevant to a specific post. They achieve this by segmenting the text into sections and then calculating the similarity between these segments. A key aspect of their work is the introduction of a segmentation method based on the concept of “communication mean”, which ensures coherence and accurate segmentation of passages. The authors assess their methodology using hotel review and travel review forums. It is worth highlighting that while their study focuses on the recommendation of opinions, our current challenge involves a more complex scenario. In our case, the recommended posts must provide concrete solutions to specific problems.
The study conducted by Pattabiraman et al. [28] centers on the automated clustering of threads within a Linux forum. This intermediate task can be employed to facilitate the recommendation of similar threads and to enhance retrieval engines. Their findings showcase the efficacy of a parabolic weighting method, which assigns higher weights to both the initial and concluding posts of a thread, surpassing the performance of a standard clustering approach.
Li et al. [29] focus on online health communities and propose a system designed to suggest pertinent discussion threads to users within these communities. Their approach involves harnessing the Latent Dirichlet Allocation model to distill the topic dimension and employing a Convolutional Neural Network to encode the concept dimension. A thread neural network is then constructed to capture thread characteristics, while a user neural network captures user interests.
Lately, a substantial body of research has been dedicated to content recommendation in MOOCs. Lan et al. [30] introduce a novel approach that employs a probabilistic model. This model integrates topic modeling of the post content, timescale modeling of the diminishing excitement of posts over time, and the modeling of learner topic interests. The aim is to offer tailored recommendations of discussion threads to MOOC students based on their specific requirements.
Zhu et al. [31] explore the recommendation of video clips that align with MOOC discussion forum entries. Their approach computes the textual similarity between the transcripts of video clips and the text within the discussion entries. To rank the video clips in response to a forum, they employ transformers, a type of neural network architecture renowned for its effectiveness in various natural language processing tasks.
Irish et al. [11] recently demonstrated that the recommendation of posts in an online graduate course improves student participation and performance, while helping staff to manage increasingly large forums [32].
Although previous works have proposed the automatic recommendation of contents and activities in e-learning settings, especially in MOOCs, such works build their recommendations on information from student profiles or on the similarity between discussion threads. Our content-based recommender does not make use of any previous information about the student, and therefore it does not face the cold-start problem. On the other hand, it requires students to participate in the course forum and explicitly state their information needs.

3. Material and Method

We propose using advanced natural language processing techniques to design and implement a recommender that provides additional material and activities to reinforce the study of topics that students find difficult. Our case study focuses on the Data Structures and Algorithms course, which is taught in two Computer Science degrees at the Universidad Nacional de Educación a Distancia (UNED), the largest distance learning university in Spain. The course covers advanced data structures and strategies (schemes) for algorithm design, and is part of the second year of both degrees. The data structures include hash tables, as well as graphs and heaps, which are later used in the implementation of some algorithmic schemes. The algorithmic schemes covered are greedy, divide and conquer, dynamic programming, backtracking, and branch and bound. Each data structure may include other relevant associated topics. In some cases, these topics concern implementation aspects (adjacency matrices and adjacency lists associated with graphs); in others, they concern properties and applications, such as spanning trees, connected components, and articulation points, also related to graphs. The presentation of each scheme is illustrated with well-known algorithms: the Prim and Dijkstra algorithms are studied in the greedy scheme, quicksort and mergesort in divide and conquer, and the Floyd algorithm in dynamic programming.
The Data Structures and Algorithms course has a virtual learning environment where students access study resources, including theoretical content, exercises, and course scheduling. The course also features a discussion forum where students can raise questions related to course content for resolution by fellow students or instructors. The discussion forum is actively moderated by teaching assistants, ensuring that communication remains respectful, informative, pertinent, and accurate. Whenever a student provides an incorrect response or when queries go unanswered, the teaching assistants promptly step in to provide accurate information.
The problem is stated as follows: given a query posed by a student in the discussion forum of the course, concerning a given learning concept, the recommender makes two types of recommendations: first, it points to other discussion threads where students have posed similar questions; and second, it proposes activities from a repository of past tests to practice and improve understanding of that concept and other similar concepts that help the student in her final goal of succeeding in the final test. For example, if a student posts a question about the Hamiltonian paths problem and its solution using the backtracking scheme, our recommender system may suggest other discussion threads about the Hamiltonian paths problem, as well as exercises from the past tests database that cover this problem or other similar problems that are solved using the same scheme, such as the graph coloring problem. By leveraging the power of natural language processing and past data, our recommender system is expected to improve students’ learning outcomes and reduce the workload of the teaching team by automating the process of answering frequently repeated questions.
Figure 1 illustrates the recommendation process, which consists of three main steps: first, the query posed by the student in the forum is pre-processed, together with the repository of past test activities and all posts from past conversations in the discussion forum. Next, the similarity between the query and the different activities and posts is calculated. Finally, activities and posts are ranked according to their similarity to the query. Each step is explained in detail below.
  • Step 1: Text Pre-Processing and Key Phrase Extraction
First, the posts in the discussion forum are downloaded in .txt format and are processed to extract the relevant information: the title, the main text and the key phrases. The same processing is performed on the student’s query. The posts and the query present the following structure:
Message no. <X>
Sent by: <sender name> on <date>
Title: <title text>
-
<main text>
The title and main text were extracted directly from this structure, for both the posts and the query. First, all words were converted to lower case and accents were removed. Then, non-essential words such as determiners and prepositions were eliminated since they do not carry semantic meaning. Next, key phrase extraction was performed using natural language processing techniques.
Key phrases represent a way of transforming unstructured information associated with the content of texts into structured information that summarizes such content. Therefore, they are very useful in information search and similarity analysis processes. Here we have applied elementary unsupervised information extraction techniques to extract these key phrases. For this, we have defined a regular expression that corresponds to the most common forms of noun and prepositional phrases in Spanish. These types of phrases are usually used to describe the concepts mentioned in the texts. Specifically, this regular expression, which has been used previously in other works [12], takes the following form:
(NEG? JJ* (NN.*)+ JJ* IN)? JJ* (NN.*)+ JJ*
In this expression, “NEG” represents a negation trigger such as “no”, “neither” or “without”, “JJ” represents an adjective, “NN” represents a noun and “IN” a preposition. The first part of the expression, up to the preposition tag IN, represents key phrases that begin with a prepositional phrase, as in [(‘algoritmo’, ‘NN’), (‘de’, ‘IN’), (‘busqueda’, ‘NN’)] (algorithm of search). If this part does not appear, the expression shall consist of a noun phrase only, as in [(‘costes’, ‘NN’), (‘teoricos’, ‘JJ’)] (theoretical costs). The initial tag NEG represents negated concepts as in [(‘sin’, ‘NEG’), (‘coste’, ‘NN’), (‘adicional’, ‘JJ’)] (without additional cost).
The same pre-processing is performed on the different questions in the repository of past tests: the text is converted to lower case, accents are removed, stop words and very frequent words are eliminated, and key phrases are extracted.
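As an illustration of this extraction step, the following minimal sketch (our own, not the authors’ released code) applies an equivalent chunk grammar to POS-tagged tokens with NLTK. The two chunk rules jointly implement the optional prepositional prefix of the pattern above, and the example input is hypothetical.

# Illustrative sketch only: key phrase chunking over POS-tagged tokens,
# mirroring the pattern (NEG? JJ* (NN.*)+ JJ* IN)? JJ* (NN.*)+ JJ*.
import nltk

# Rule 1 covers phrases with the prepositional prefix; rule 2 covers plain
# (possibly negated) noun phrases, matching the negated-concept example above.
GRAMMAR = r"""
KP: {<NEG>?<JJ>*<NN.*>+<JJ>*<IN><JJ>*<NN.*>+<JJ>*}
    {<NEG>?<JJ>*<NN.*>+<JJ>*}
"""
CHUNKER = nltk.RegexpParser(GRAMMAR)

def extract_key_phrases(tagged_tokens):
    """Return the token sequences matched by the KP rules."""
    tree = CHUNKER.parse(tagged_tokens)
    return [" ".join(word for word, _ in subtree.leaves())
            for subtree in tree.subtrees()
            if subtree.label() == "KP"]

# Hypothetical pre-tagged input (already lower-cased, accents removed).
tagged = [("algoritmo", "NN"), ("de", "IN"), ("busqueda", "NN")]
print(extract_key_phrases(tagged))  # -> ['algoritmo de busqueda']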
  • Step 2: Text Representation and Similarity Calculation
We use three different approaches to compute similarity between the student’s query and the questions and posts: using traditional bags of words, using key phrases and using word embeddings.
  • Computing similarity using bags of words. We calculate the similarity between the student’s query and (1) all the questions in the test repository and (2) all the posts in the forums. We represent pre-processed texts as bags of words, which model the text as a multiset (bag) of its words, ignoring grammar and word order. We then calculate the Jaccard coefficient between the bags of words that represent each pair of texts. We implemented and tested other similarity measures, including the LIN measure and the cosine distance, but we report the results obtained with the Jaccard coefficient, which produced the most favorable outcomes. The Jaccard coefficient is a measure of similarity between finite sample sets, defined as the size of the intersection divided by the size of the union of the sample sets:
    jaccard(x, y) = |x ∩ y| / |x ∪ y| = |x ∩ y| / (|x| + |y| - |x ∩ y|)
    In addition, we compute the similarity between the titles and the bodies (main text) for each post and query. Since the titles often contain the most important elements of the body section, we assign double weight to the similarity between titles (an illustrative sketch of this weighted similarity is provided after this list).
  • Computing similarity using key phrases. To represent the texts, we use bags of key phrases that were extracted as explained in step 1. Using these representations, we compute the similarity between the student’s query and the test questions and posts using the Jaccard coefficient. Additionally, we compute the similarity between the titles and the main bodies of each post and query. Since titles usually contain the most important information from the body section, we assign double weight to the similarity between titles.
  • Computing similarity using word embeddings. We first convert the texts into embeddings and then calculate the cosine distance between the embeddings of each pair of posts. This allows us to map posts with similar meanings close together in vector space. To achieve this, we use the Sentence Transformers (ST) framework [33], which leverages a pre-trained BERT model to obtain the contextual representation of the posts. ST applies a mean pooling method to the output, which converts token embeddings to sentence embeddings of a fixed size. By default, this method averages the output embeddings.
    The Sentence Transformers framework provides a set of pre-trained models for various functionalities such as semantic search, semantic similarity, question answering, clustering, image and text, etc. Moreover, these models are generally available in English and only some of them have a multilingual version. In this work, we have selected the best-performing pre-trained model for semantic similarity that has a multilingual version, since the analyzed texts are in Spanish. (The performance of the models available for this framework is reported at https://www.sbert.net/docs/pretrained_models.html (accessed on 18 September 2023)).
    Some pre-trained models support several similarity functions, such as the dot product, cosine distance, or Euclidean distance. The model used here has only one similarity function available, the cosine distance. This is also the function that works best, since the framework provides a loss function specially designed for the cosine distance (CosineSimilarityLoss).
    To compare each post and the student’s query, we obtain embeddings for the title and the body separately. We also assign double weight to the similarity between titles, as they often contain the most important information.
    We utilized a BERT-based model to generate embeddings for both the title and the body of each post. BERT and other transformer networks produce embeddings for each token in the input text. To create a fixed-sized sentence embedding, the model applies mean pooling, which involves averaging the output embeddings for all tokens to yield a fixed-size vector.
    We used the multilingual model “paraphrase-multilingual-mpnet-base-v2” [33], which has been trained on parallel data for over 50 languages, including Spanish. The model is capable of generating aligned vector spaces, meaning that similar inputs in different languages are mapped closely in a vector space, without requiring explicit specification of the input language.
    This model maps sentences and paragraphs to a dense vector space of 768 dimensions, and can be employed for tasks such as text similarity, clustering, and semantic search.
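To make the bag-of-words and key phrase variants concrete, the following minimal sketch (our illustration; the 2:1 title/body weighting is our reading of “double weight” and an assumption, not the authors’ exact formula) computes the weighted Jaccard similarity between a query and a post:

# Minimal sketch: Jaccard similarity over bags of words or bags of key phrases.
def jaccard(x, y):
    """Jaccard coefficient between two bags (here treated as sets)."""
    x, y = set(x), set(y)
    if not x and not y:
        return 0.0
    return len(x & y) / len(x | y)

def weighted_similarity(query, post):
    """query, post: dicts mapping 'title' and 'body' to lists of tokens
    (bag of words) or of key phrases extracted in step 1."""
    title_sim = jaccard(query["title"], post["title"])
    body_sim = jaccard(query["body"], post["body"])
    # Titles usually carry the most important concepts: double weight.
    return (2 * title_sim + body_sim) / 3

# Hypothetical example with key phrases:
q = {"title": ["exploracion cuadratica"], "body": ["tablas de dispersion", "colision"]}
p = {"title": ["exploracion cuadratica"], "body": ["tablas de dispersion", "funcion hash"]}
print(round(weighted_similarity(q, p), 2))  # -> 0.78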
  • Step 3: Ranking and Presentation
The posts and questions are sorted based on their similarity to the student’s query, resulting in two separate rankings—one for posts and the other for questions. Within each ranking, the posts or questions are ordered in descending order of similarity to the query. The top-N posts/questions are then presented to the student.
When ranking both posts and questions, we use the three different approaches to compute the similarity presented in step 2: traditional bags of words, key phrases, and word embeddings. The three approaches are evaluated and the results are presented and compared in Section 5. Moreover, an illustrative example of a real query along with the recommendations suggested by our system is provided in Appendix A.
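For the embedding-based variant and the final ranking step, a minimal sketch using the Sentence Transformers model cited above could look as follows (illustrative only; the 2:1 title/body weighting and the data layout are assumptions, not the authors’ exact implementation):

# Illustrative sketch: embedding similarity with Sentence Transformers and
# top-N ranking of posts or test questions for a given student query.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

def rank_by_embedding(query, items, top_n=5):
    """query: dict with 'title' and 'body' strings.
    items: list of dicts with 'id', 'title', 'body'. Returns the top_n ids."""
    q_title = model.encode(query["title"], convert_to_tensor=True)
    q_body = model.encode(query["body"], convert_to_tensor=True)
    scored = []
    for item in items:
        i_title = model.encode(item["title"], convert_to_tensor=True)
        i_body = model.encode(item["body"], convert_to_tensor=True)
        # Mean-pooled sentence embeddings compared with cosine similarity;
        # titles receive double weight, as in the other two representations.
        score = (2 * util.cos_sim(q_title, i_title).item()
                 + util.cos_sim(q_body, i_body).item()) / 3
        scored.append((score, item["id"]))
    scored.sort(reverse=True)
    return [item_id for _, item_id in scored[:top_n]]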

4. Evaluation Methodology

This section describes the data, experiments, and metrics employed to evaluate the adequacy of our approach and the relevance of the recommendations made by our system.

4.1. Dataset

Our recommender system has been evaluated using data collected over several years in a subject related to algorithms and advanced data structures (see previous section). The data used in our experiments consist of two separate datasets. The first dataset contains conversations among students in the open discussion forum of the subject. In this forum, online students mostly ask questions about different topics related to the subject, which are then discussed with other students and instructors. Specifically, our forum dataset contains 666 conversations spanning nine academic years (2011–2020), with each conversation consisting of several posts. In total, our dataset comprises 3198 posts. As the evaluation query set, we randomly selected 100 queries posed by students in the discussion forum. An example of such a query, translated from Spanish to English, is
Title: Quadratic probing in hash tables (page 52).
Body: Hi. I’m not sure I have understood the quadratic probing. For example, if all
the coefficients ‘ck’ are equal to 1, that means that if there is a collision then
it searches the next position in the table, if the collision persists 4 more
positions ahead, if it persists 9 more positions, ...
Am I right? Greetings.
The second dataset is composed of 156 multiple-choice questions from 26 different tests taken by students from 2012 to 2020. An example of a question is shown below.
Regarding hash tables, it is true that:
  (a) In the closed hashing method with double hashing, the functions h and h’
  that are applied can be the same when the number of elements in the table,
  m, meets certain conditions.
  (b) In collision resolution, open hashing is always more efficient than closed
  hashing.
  (c) In the double hashing, the function h’(x) must necessarily satisfy that
  h’(x) <> 0.
  (d) In collision resolution, if the load factor is 1 it can be solved using
  closed hashing.

4.2. Evaluation Methodology

Since the data used in our experimentation come from a real case study, their quantity is limited, which in turn restricts the range of applicable techniques. Nevertheless, for comparative purposes, we have integrated three distinct approaches into our analysis. These approaches encompass classical Natural Language Processing (NLP) techniques, including those based on bags of words and key phrases, as well as contemporary methods that leverage implicit knowledge, such as embeddings. We believe that these methodologies exemplify the range of text representation methods that can be employed for similarity calculation. Therefore, we compute the similarity between the query and the posts/questions using three strategies: (1) similarity between bags of words, (2) similarity between bags of key phrases, and (3) similarity between word embeddings.
As the evaluation metric, we have used precision at k (P@k), a standard information retrieval metric defined as the proportion of relevant documents among the first k documents in the retrieval ranking. We set k to 5. Evaluation is performed manually by four experts in data structures and algorithms, and includes both the evaluation of the recommendations of reinforcement activities from previous tests and the evaluation of the recommendations of related posts from the course discussion forum.
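As a small illustration (our own helper, with hypothetical thread identifiers), P@5 can be computed from the experts’ relevance judgments as follows:

# Illustrative P@k computation from manual relevance judgments.
def precision_at_k(recommended_ids, relevant_ids, k=5):
    """Fraction of the top-k recommended items judged relevant."""
    top_k = recommended_ids[:k]
    return sum(1 for item in top_k if item in set(relevant_ids)) / k

# Example: 4 of the 5 recommended threads are relevant -> P@5 = 0.8
print(precision_at_k(["t12", "t7", "t3", "t90", "t41"], {"t12", "t7", "t90", "t41"}))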
We evaluate the recommendations using two different criteria for defining when a post or activity is considered as relevant for a given query:
  • The strict evaluation criterion or problem-related recommendation: a question from a test is relevant to the student query if it refers to the same concept or problem, solved using the same data structure or scheme. This is a very restrictive criterion that considers only questions strongly related to the query.
  • The relaxed evaluation criterion or scheme-related recommendation: it allows the recommendation of questions concerning the same data structure or scheme but a different problem, since they may also be of interest to the student to reinforce learning. For example, if the student is asking about the resolution of the knapsack problem using the greedy scheme, the recommender may suggest activities regarding other problems solved using the greedy strategy (e.g., Dijkstra, or the exchange problem).

5. Results and Discussion

This section summarizes the results of the evaluation of the different recommendation strategies. We first present the results concerning the recommendation of related discussion threads and next, the results of the recommendation of reinforcement activities.

5.1. Recommendation of Discussion Threads

As outlined in the Method section, we conducted several preliminary experiments to determine the most useful information from the queries and posts/activities for our recommendation system. We explored the following approaches: (1) calculating the similarity between the title of the query and the body of each post; (2) calculating the similarity between the body of the query and the body of each post; and (3) weighted similarity between the titles and bodies of the query and the post, with double weight given to titles. Our experiments revealed that the weighted similarity between the titles and bodies produced the best results. Thus, we used this approach in our subsequent experiments. Additionally, we compared three different textual representations: bags of words, bags of key phrases, and word embeddings.
Table 1 summarizes the evaluation results for the recommendation of similar posts. As expected, for all three approaches the results of the relaxed evaluation are better than those of the strict evaluation. The best results are obtained when the text is represented as a set of key phrases and the similarity between the query and the post is computed using the Jaccard coefficient. When using the “relaxed criterion” (i.e., when considering that all posts concerning the same algorithmic scheme as the problem stated in the student query are relevant for recommendation), the precision at 5 is above 80%, which means that more than 4 out of the 5 items recommended are relevant to the query. When using the “strict criterion” (i.e., only the posts concerning the same problem and algorithmic scheme as the one stated in the student query are relevant for recommendation), the precision at 5 decreases to 60%, which means that 3 out of the 5 items recommended are relevant to the query. When the text is represented using bags of words or embeddings, the results are close to each other, although the word embeddings behave better than the bags of words. In the case of the relaxed evaluation strategy, the results of the bags of words and word embeddings approaches are significantly worse than those obtained for the key phrases representation.
Our results seem to indicate that, in our particular scenario, key phrases convey the topical content of the text better than individual words. Since the names of the different computational problems and algorithmic schemes are not individual words but noun phrases (e.g., “divide and conquer”, “branch and bound”, “binary search”, or “the skyline problem”, to name a few), this was expected. In contrast, words such as “search”, “vector”, “graph”, and “heap” are commonly used when describing different problems and schemes, which may lead to confusion. Word embeddings, in turn, are able to capture contextual information and thus produce better results than individual words.
Figure 2, Figure 3 and Figure 4 show, in a graphical manner, the evaluation results for the recommendation of similar posts, when the texts are represented as bags of words, key phrases and word embeddings, respectively. We can observe that, when the evaluation is performed under the “strict” criterion, the differences between the three approaches are not noticeable. In contrast, in the case of the “relaxed” evaluation, the use of the bag of words approach produces poorer results than the other two approaches.
We first analyze the results for the “strict” evaluation (i.e., we only consider as relevant to a query the posts that address the same problem solved using the same scheme), which are represented by the dashed line in the graphics. First, we see that, for all different textual representations, there are a number of queries for which no relevant posts are recommended (see blue line in Figure 2, Figure 3 and Figure 4). These queries refer to problems that are underrepresented in the database, with only one or two conversations dealing with them. This is, therefore, a limitation of the dataset that will be progressively overcome as the subject forum grows. Indeed, this is a limitation that extends, in general, to the entire evaluation: there are not enough posts for an important number of topics. In particular, we found that the number of queries without relevant recommendations is higher for the word embeddings approach (8 queries) than for the bags of words and bags of key phrases approaches (3 queries).
We next analyze the results for the “relaxed” evaluation (i.e., we consider as relevant to a query all the posts that address either the same problem or the same algorithmic scheme), which are represented by the solid line in Figure 2, Figure 3 and Figure 4. We can see that, for the bags of words representation, 14 posts obtain 5 relevant recommendations. In the case of the key phrases representation, this number rises to 45, while in the case of the word embeddings, the number reaches 38 posts. If we consider the number of posts for which we obtain more than 80% relevant recommendations (i.e., at least 4 out of the 5 posts recommended are relevant), we find that, for the bags of words representation, 23 posts reach this threshold. In the case of the key phrases representation, this number rises to 73, while in the case of the word embeddings, the number reaches 56 posts. This seems to indicate that, especially for the key phrases and word embedding approaches, most of the recommendations made to the students are helpful. However, in the case of the word embedding approach, we find that the number of recommendations where precision is under 0.4 is higher than expected (13%), while in the case of the key phrases approach it is 0%.
A problem that we have found, and that affects all three textual representations, is the presence of posts and queries in which neither the name of the problem nor the name of the scheme is mentioned; only the page of the textbook where the problem appears is given. This makes it difficult for the algorithm to infer what the topic of the problem is and to recommend relevant threads. This may be seen, for example, in the following query:
Exercise 7.3: I do not quite understand what is being asked for in this exercise.
Shouldn’t it always be the same area that is trimmed from the board?
i.e., the sum of the different products x*y.
We have also observed poor performance for queries about problems that can be solved using different schemes: frequently, when a student asks about the solution to a problem using a given scheme, the algorithm recommends posts about the same problem solved using a different scheme, which is not exactly what the student is asking for. This is the case, for example, of the packing problem, which may be solved using various schemes studied in the course, such as greedy and dynamic programming.
Finally, orthographic errors and typos, which are quite frequent in the course forum, also pose a problem for our recommendation algorithm.

5.2. Recommendation of Reinforcement Activities

We next evaluate the recommendation of reinforcement activities. Given a query made by a student in the discussion forum that reflects a doubt or difficulty in the resolution of a problem or in the understanding of a concept, the recommender proposes to the student one or more exercises/activities from tests of past academic years that allow them to practice the same or similar concepts as the one addressed in their query.
For this second set of experiments, we have selected a subset of 20 queries from the evaluation query set, because most of the topics dealt with in the forum are not present in the exam dataset. Again, we only show the results obtained for the strategy that weights the similarity between the text of the activities and the title and body sections of the query. We also compare the three textual representations: bags of words, bags of key phrases, and word embeddings.
Table 2 summarizes the evaluation results for the recommendation of reinforcement activities. Moreover, Figure 5, Figure 6 and Figure 7 show the detailed results. As in the case of the recommendation of posts, the results of the relaxed evaluation are better than those of the strict evaluation. The best results are obtained when the text is represented as a set of key phrases. When using the “relaxed criterion”, the precision at 5 is above 80%. When using the “strict criterion”, the precision at 5 is 65%. For the bags of words approach, precision at 5 is 64% for the strict evaluation and 75% for the relaxed evaluation. These are very promising results: given that the database of test questions is much smaller than that of student conversations, we hypothesize that increasing the number of activities would lead to better recommendation results. The difficulty, however, lies in ensuring that the database of activities reflects the diversity of questions raised by students in the forum.
In contrast, the results of the embeddings approach are worse than those obtained when recommending posts, and also worse than those of the bags of words approach. We can see in the figures that, while for the bags of words and key phrase approaches all the queries receive at least two relevant recommendations, in the case of the word embeddings approach there are a number of queries for which no relevant recommendations are suggested. We have observed that this occurs, particularly, when the queries address topics that use a vocabulary shared between different algorithmic schemes and data structures, as is the case, for instance, with the branch and bound scheme (which frequently mentions the use of heaps), or the Dijkstra, Prim, or Kruskal algorithms (which mention the use of graph structures).

6. Conclusions

In this paper, we have presented a content-based method for recommending, given a query posed by a student in a course forum, other discussion threads where similar questions have been raised as well as related reinforcement activities. We have applied our methodology to a course devoted to data structures and algorithms, which is taught in the second year of different Computer Science programs. The results show that, for a given query, the recommender is able to propose both related activities and student conversations with high precision (around 80% on average). Since students usually experience the same difficulties and the queries in the discussion forums are often repeated, automatically recommending other discussions where the same query is solved will help to reduce the teaching effort. Moreover, the “relaxed” strategy has been shown to be very effective in recommending activities that, while not directly answering the question posed by the student, help her deepen and expand the study of the scheme or data structure underlying the problem.
However, it is important to highlight as a limitation of our study the requirement for the forum contents to be accurate and reliable, as recommending erroneous content to students would be undesirable. This can only be achieved in forums moderated by the course instructors, as in our case study. Another limitation of our study is the difficulty of applying conventional state-of-the-art approaches within our specific research context. Firstly, the absence of detailed student information in our dataset posed a significant limitation: many established recommendation techniques rely on access to comprehensive user profiles, a resource we lacked. Secondly, our dataset lacks user preferences and historical interactions, since the majority of students only sporadically engage in forum activities. Lastly, the volume and nature of our available data pose challenges for the implementation of certain ranking methods. Consequently, traditional recommendation methods such as Collaborative Filtering, Diversity-Aware, and Fairness-Aware ranking approaches proved unsuitable for our case: these methods typically require the ability to identify groups of similar users, provide diverse item recommendations, and access personal demographic information about the students, none of which we could fulfill. Furthermore, Learning to Rank methods, which rely on labeled data, are inapplicable due to the unavailability of such data in our dataset. These challenges, while limiting our approach, underscore the distinctive nature of our study’s context and dataset.
As part of our future work, we intend to enhance our recommender system by leveraging information about the students and their progress. Specifically, with regard to recommending reinforcement activities, we will categorize these activities into different difficulty levels. This categorization will enable the recommender system to take into account the student’s past performance and tailor recommendations accordingly.
Additionally, we will expand our recommender system to include recommendations for online learning materials, including answers provided by ChatGPT in response to student queries.
Furthermore, we aim to develop a user-friendly interface that allows students to access the recommender system effortlessly from any device and location.
In terms of evaluation, our future plans involve conducting a real-world assessment. This evaluation will encompass both quantitative and qualitative aspects. Quantitatively, we will measure the average improvement of students in their final course assessments when they have used the recommender system throughout the semester. Qualitatively, we will measure the overall student satisfaction with the tool.

Author Contributions

All authors conceived the experiment(s). L.P. conducted the experiments and analyzed the results. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the 2022 UNED project for the innovations’ teaching group GID2017-1.

Institutional Review Board Statement

All methods were carried out in accordance with the UNED’s guidelines and regulations. All data gathered from the course forum were correctly anonymized and only accessed by the persons (instructors) authorized by the students and the institution. Since no human participation was involved, no ethical approval was necessary.

Data Availability Statement

Data are unavailable due to privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

    The following abbreviations are used in this manuscript:
UNED    Universidad Nacional de Educación a Distancia
POS     Part Of Speech
ST      Sentence Transformers
P@K     Precision at K
BERT    Bidirectional Encoder Representations from Transformers

Appendix A

This appendix provides an example of the recommendations provided by our system for both forum conversations and reinforcement activities. Specifically, the examples presented here are obtained when embeddings are used to represent the text.
Below, we showcase the first recommended conversations for the following real question posed by a student in the course discussion forum (originally in Spanish):
Question posed by the student
Message no. 1
Sent by: XXX on XXX
Title: Prim vs. Kruskal
-
Good morning.
On page 71, at the end of the section on the cost of the Kruskal algorithm,
it states:
“... if the graph is dense, the Prim algorithm is preferable since its
cost is O(n^2). However, if the graph is sparse, the Kruskal algorithm
is more suitable.”
In the case of a dense graph, it is clear. But in the case of a
sparse graph, if we use adjacency lists in the Prim algorithm,
wouldn’t it be the same as using Prim or Kruskal? Both have a
cost of O(a log n).
Thank you.
First recommendation
Message no. 1
Sent by: XXX on XXX
Title: Minimum Spanning Trees: Prim vs. Kruskal Algorithm
-
I have two questions:
(1)-Regarding the cost of the two algorithms, I read (and I
believe I understand) that, in the case of Prim, when implemented
with adjacency matrices, the cost is O(n^2), while if implemented
with adjacency lists and heaps, the cost is O(n log(n)). This
implementation will mainly affect the Distance(x, y) function,
is that correct? Therefore, in the case of dense graphs, where
the number of edges approaches n^2, it is preferable to implement
it with matrices. When studying the cost of the Kruskal algorithm,
at the end of the section (page 71), it is said that if the graph
is dense, it is better to use Prim, and if it is sparse, Kruskal
is better, but always when implemented with matrices, right?
(2)-Regarding the optimality proof of both algorithms, they are
based on Lemma 3.2.1 (page 65). When reading it, I do not under-
stand the phrase, “Let (u, v) be the edge of the smallest weight
leaving NA...” How can an edge “leave a set of nodes?” Does it
mean that it is the edge of the smallest weight leaving a node in
NA (u) and going to another that does not belong to NA (v)?
Thank you in advance.
Second recommendation
Message no. 1
Sent by: XXX on XXX
Title: Cost of Kruskal’s Algorithm, page 71
-
Hello,
Regarding the cost of the algorithm, you mention that the “Buscar
Componente Conexa” (Connected Component Search) function will be
executed 2*a times, and you assign it an implicit constant cost
in order to disregard it compared to the sorting cost. However,
I understand that in the worst case, the “Buscar Componente Conexa”
function will have a cost of “n” because it will be necessary to
search the node in all components, and in each component, check
each of its nodes unless there is some additional data structure
in the node that allows knowing to which component it belongs. But
in that case, we would be increasing the cost of “Fusionar” (Merge)
to “n” to update the information of all nodes in the components
that are merged.
Greetings.
Third recommendation
Message no. 1
Sent by: XXX on XXX
Title: Question about Prim’s Algorithm
-
In Prim’s algorithm, on page 69, the line reads:
if Distance(j, node) < costeMinimo[j] ∧ costeMinimo[j] ≠ -1 then
Isn’t the condition ∧ costeMinimo[j] ≠ -1 redundant?
In other words, could it be eliminated since, assuming costeMinimo[j]=-1,
this condition implies that Distancia(j, node) < costeMinimo[j] is not
satisfied (since Distancia(j, node) is always greater than or equal
to 0), and therefore, it would not enter the if statement?
In case I am mistaken and the condition ∧ costeMinimo[j] ≠ -1 is
indeed not redundant, could you provide an example
where this condition is necessary?
Greetings.
Next, we showcase the first recommended reinforcement activities for the same student’s question (again, translated from Spanish):
First recommendation
The Prim and Kruskal algorithms are greedy algorithms applied when
calculating a minimum spanning tree. Regarding both algorithms,
which of the following statements is true?
    A. When initializing the Prim algorithm, it selects an
    arbitrary node, while Kruskal chooses the node with the
    fewest edges.
    B. Prim starts from an arbitrary node, while Kruskal starts
    from the set of edges ordered from lowest to highest cost.
    C. Prim is more efficient in the case of a very sparse graph,
    while Kruskal is more efficient if the graph is very dense.
    D. None of the above.
Second recommendation
Which of the following statements is true regarding the cost of
some algorithms and their efficiency?
    A. Kruskal’s algorithm has a cost that is in O(n log n).
    B. Prim’s algorithm, when the adjacency list is combined
    with the use of a heap, is more efficient when the graph is
    dense because this way the cost is in O(n log n).
    C. The cost of the Quicksort algorithm in the worst case is of
    order O(n log n).
    D. The function that creates a heap from a collection of values,
    if the Sink procedure is used, can have a linear cost.
Third recommendation
Given the undirected graph in the figure, indicate the order in
which the nodes would be selected (become part of the tree) when
applying Prim’s algorithm starting from node A:
    A. A, F, E, C, B, D
    B. A, B, C, E, D, F
    C. A, C, D, B, F, E
    D. None of the above

References

  1. Dhawan, S. Online Learning: A Panacea in the Time of COVID-19 Crisis. J. Educ. Technol. Syst. 2020, 49, 5–22. [Google Scholar] [CrossRef]
  2. Adkins, S.S. The 2019 Global Learning Technology Investment Patterns: Another Record Shattering Year; Technical Report; Metaari’s Analysis of the 2019 Global Learning Technology Investment Patterns; Metaari: Monroe, WA, USA, 2020. [Google Scholar]
  3. Arkorful, V.; Abaidoo, N. The role of e-learning, the advantages and disadvantages of its adoption in Higher Education. Int. J. Educ. Res. 2014, 2, 397–410. [Google Scholar]
  4. Tuckman, B. Relations of academic procrastination, rationalizations, and performance in a web course with deadlines. Psychol. Rep. 2005, 96, 1015–1021. [Google Scholar] [CrossRef]
  5. Bakia, M.; Shear, L.; Toyama, Y.; Lasseter, A. Understanding the Implications of Online Learning for Educational Productivity. Technical Report; U.S. Department of Education Office of Educational Technology: Washington, DC, USA, 2012.
  6. Twigg, C. Improving quality and reducing cost: Designs for effective learning. Change 2003, 35, 22–29. [Google Scholar] [CrossRef]
  7. Twigg, C. Improving learning and reducing costs: New models for online learning. Educ. Rev. 2003, 38, 28–38. [Google Scholar]
  8. Dumford, A.D.; Miller, A.L. Online Learning in Higher Education: Exploring Advantages and Disadvantages for Engagement. J. Comput. High. Educ. 2018, 30, 452–465. [Google Scholar] [CrossRef]
  9. Plaza, L.; Araujo, L.; López-Ostenero, F.; Martínez-Romo, J. Use of advanced natural language processing techniques for the automatic recommendation of reinforcement activities. In INTED2021 Proceedings; IATED: Valencia, Spain, 2021; pp. 5699–5705. [Google Scholar]
  10. Norman, M. Three Ways to Encourage Conversation in Online Discussion Forums. 2016. Available online: https://ctl.wiley.com/three-ways-to-encourage-conversation-in-online-discussion-forums/ (accessed on 18 September 2023).
  11. Irish, I.; Chatterjee, S.; Tailor, C.; Finkelberg, R.; Arriaga, R.; Starner, T. Post Recommendation System Impact on Student Participation and Performance in an Online AI Graduate Course. In Proceedings of the Ninth ACM Conference on Learning @ Scale, Roosevelt Island, NY, USA, 1–3 June 2022; pp. 24–34. [Google Scholar]
  12. Duque, A.; Fabregat, H.; Araujo, L.; Martinez-Romo, J. A keyphrase-based approach for interpretable ICD-10 code classification of Spanish medical reports. Artif. Intell. Med. 2021, 121, 102177. [Google Scholar] [CrossRef] [PubMed]
  13. Yengin, I.; Karahoca, D.; Karahoca, A.; Yücel, A. Roles of teachers in e-learning: How to engage students and how to get free e-learning and the future. Procedia-Soc. Behav. Sci. 2010, 2, 5775–5787. [Google Scholar] [CrossRef]
  14. Resnick, P.; Varian, H.R. Recommender Systems. Commun. ACM 1997, 40, 55–58. [Google Scholar] [CrossRef]
  15. Park, D.; Kim, H.; Choi, I.; Kim, J. A literature review and classification of recommender systems research. Expert Syst. Appl. 2010, 39, 10059–10072. [Google Scholar]
  16. Prins, F.J.; Nadolski, R.J.; Berlanga, A.J.; Drachsler, H.; Hummel, H.G.; Koper, R. Competence description for personal recommendations: The importance of identifying the complexity of learning and performance situations. Educ. Technol. Soc. 2008, 11, 141–152. [Google Scholar]
  17. Al-Badarneh, A.; Alsakran, J. An Automated Recommender System for Course Selection. Int. J. Adv. Comput. Sci. Appl. 2016, 7, 166–175. [Google Scholar] [CrossRef]
  18. Liu, J.; Wang, X.; Liu, X.; Yang, F. Analysis and design of personalized recommendation system for university physical education. In Proceedings of the International Conference on Networking and Digital Society, Wenzhou, China, 30–31 May 2010; Volume 2, pp. 472–475. [Google Scholar]
  19. Pinto, F.M.; Estefania, M.; Cerón, N.; Andrade, R.; Campaña, M. iRecomendYou: A Design Proposal for the Development of a Pervasive Recommendation System Based on Student’s Profile for Ecuador’s Students’ Candidature to a Scholarship. In New Advances in Information Systems and Technologies: Volume 2; Springer International Publishing: Cham, Switzerland, 2016; Volume 445, pp. 537–546. [Google Scholar]
  20. Ray, S.; Sharma, A. A Collaborative Filtering Based Approach for Recommending Elective Courses. In Information Intelligence, Systems, Technology and Management: 5th International Conference, ICISTM 2011, Gurgaon, India, 10–12 March 2011. Proceedings 5; Springer: Berlin/Heidelberg, Germany, 2011; Volume 141, pp. 330–339. [Google Scholar]
  21. Valdiviezo-Díaz, P.; Aguilar, J.; Riofrío, G. A fuzzy cognitive map like recommender system of learning resources. In Proceedings of the IEEE International Conference on Fuzzy Systems, Vancouver, BC, Canada, 24–29 July 2016; pp. 1539–1546. [Google Scholar]
  22. Ansari, M.H.; Moradi, M.; Nikrah, O.; Kambakhs, K. CodERS: A hybrid recommender system for an E-learning system. In Proceedings of the 2nd International Conference of Signal Processing and Intelligent Systems, Tehran, Iran, 14–15 December 2016; pp. 1–5. [Google Scholar]
  23. Bourkoukou, O.; Bachari, E.E.; El, M. A Personalized E-Learning Based on Recommender System. Int. J. Learn. Teach. 2016, 2, 99–103. [Google Scholar] [CrossRef]
  24. Chau, H.; Barria-Pineda, J.; Brusilovsky, P. Learning Content Recommender System for Instructors of Programming Courses. In Artificial Intelligence in Education: 19th International Conference, AIED 2018, London, UK, 27–30 June 2018, Proceedings, Part II 19; Springer International Publishing: Cham, Switzerland, 2018; Volume 10948, pp. 47–51. [Google Scholar]
  25. Singh, A.; P, D.; Raghu, D. Retrieving similar discussion forum threads: A structure based approach. In Proceedings of the SIGIR’12—Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, Portland, OR, USA, 12–16 August 2012. [Google Scholar]
  26. Duan, H.; Zhai, C. Exploiting Thread Structures to Improve Smoothing of Language Models for Forum Post Retrieval. In Proceedings of the Advances in Information Retrieval, Dublin, Ireland, 18–21 April 2011; pp. 350–361. [Google Scholar]
  27. Papadimitriou, D.; Koutrika, G.; Velegrakis, Y.; Mylopoulos, J. Finding Related Forum Posts through Content Similarity over Intention-Based Segmentation. IEEE Trans. Knowl. Data Eng. 2017, 29, 9. [Google Scholar] [CrossRef]
  28. Pattabiraman, K.; Sondhi, P.; Zhai, C. Exploiting Forum Thread Structures to Improve Thread Clustering. In Proceedings of the 2013 Conference on the Theory of Information Retrieval, Copenhagen, Denmark, 29 September–2 October 2013; pp. 64–71. [Google Scholar]
  29. Li, M.; Gao, W.; Chen, Y. A Topic and Concept Integrated Model for Thread Recommendation in Online Health Communities. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, Virtual Event, Ireland, 19–23 October 2020; pp. 765–774. [Google Scholar]
  30. Lan, A.S.; Spencer, J.C.; Chen, Z.; Brinton, C.G.; Chiang, M. Personalized Thread Recommendation for MOOC Discussion Forums. In Proceedings of the Machine Learning and Knowledge Discovery in Databases, Würzburg, Germany, 16 September 2019; pp. 725–740. [Google Scholar]
  31. Zhu, P.; Hauff, C.; Yang, J. MOOC-Rec: Instructional Video Clip Recommendation for MOOC Forum Questions. In Proceedings of the 15th International Conference on Educational Data Mining, Durham, UK, 24–27 July 2022; pp. 705–709. [Google Scholar]
  32. Irish, I.; Chatterjee, S.; Jivani, S.; Jia, X.; Lee, J.; Arriaga, R.; Starner, T. Managing the Chaos: Approaches to Navigating Discussion Forums for Instructional Staff. In Proceedings of the 10th ACM Conference on Learning @ Scale, Copenhagen, Denmark, 20–22 July 2023; pp. 406–410. [Google Scholar]
  33. Reimers, N.; Gurevych, I. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv 2019, arXiv:1908.10084. [Google Scholar]
Figure 1. Recommender architecture.
Figure 2. Evaluation results for the bags of words representation (recommendation of discussion threads).
Figure 3. Evaluation results for the key phrases representation (recommendation of discussion threads).
Figure 4. Evaluation results for the word embeddings representation (recommendation of discussion threads).
Figure 5. Evaluation results for the bags of words representation (recommendation of reinforcement activities).
Figure 6. Evaluation results for the key phrases representation (recommendation of reinforcement activities).
Figure 7. Evaluation results for the word embeddings representation (recommendation of reinforcement activities).
Table 1. Precision at 5 for the different strategies for the recommendation of posts. Best results are marked with an asterisk (*).

Similarity           Criterion          P@5
Bags of words        Problem (strict)   0.56
Bags of words        Scheme (relaxed)   0.69
Bag of key phrases   Problem (strict)   0.60 *
Bag of key phrases   Scheme (relaxed)   0.82 *
Embeddings           Problem (strict)   0.58
Embeddings           Scheme (relaxed)   0.73
Table 2. Precision at 5 for the different strategies for the recommendation of reinforcement activities. Best results are marked with an asterisk (*).

Similarity           Criterion          P@5
Bags of words        Problem (strict)   0.64
Bags of words        Scheme (relaxed)   0.75
Bag of key phrases   Problem (strict)   0.65 *
Bag of key phrases   Scheme (relaxed)   0.80 *
Embeddings           Problem (strict)   0.45
Embeddings           Scheme (relaxed)   0.63
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
