How to create an interactive programming course?

Learning to code is becoming interactive. In the next few years, most video tutorials, especially the basic ones, will be replaced with highly interactive, hands-on coding courses.

To stay relevant as a content creator, you have to move beyond plain videos. People do not want to binge-watch when they can take quizzes, and they do not want to take quizzes when they can practice hands-on. codedamn helps you build today the courses you would otherwise end up building seven years down the line.

Note: This article is for content creators and people who want to create/sell their interactive courses with codedamn. If you're a learner/student, check out codedamn learning paths.

Interactivity with programming courses

Programming is best learned hands-on. As a developer, you will agree on the importance of learning by doing. However, most current learning methods revolve around passively consuming video-based content. codedamn helps you solve this by letting you build an engaging, hands-on course.

Creating an interactive course on codedamn

Let's take a look at how you can create an interactive course on codedamn. Before that, feel free to try out an existing course - Creating a URL shortener with MongoDB and Node.js

Step 1 - Get a creator account

All the magic happens on the codedamn creator website. If you already have credentials to access it, you're good. If not, please apply to become a content creator here. Currently this is a closed group of content creators only. We'll get back to you within 24 hours.

Step 2 - Creating a classroom

A classroom on codedamn is a collection of interactive labs that constitute your course.

Once you log into the dashboard on the codedamn creator panel, create your first classroom by clicking the button.

Button to create a classroom on codedamn

Step 3 - Fill in the basic details

Now you can fill in basic details about the classroom. This metadata is reflected directly on your public course page.

Step 4 - Creating your first lab

Once you create the classroom, you'll be immediately redirected to the curriculum page. This is the page where you can add and edit individual labs.

A lab on codedamn is a part of the classroom where you can teach a concept. This concept could be independent, or it could build up to the next concept in your interactive course.

  • Click on the "Create New Lab" button you see.
  • You'll see the following interface. I'll break it down into two parts: creating the lab and testing the lab.

Creating Lab

Here's the information for each number:

(1) Problem Title - This is the title of your lab, the concept you're teaching in this exercise. (Example - Creating your first regular expression)

(2) Problem statement - This is a description of the lab you're creating. This is where you should write all the helper text and information the user should read and understand before attempting the lab/challenge.

(3) Programming template - This is the environment the user will get and the environment in which your code will be executed. As a recommendation, use the following templates for the corresponding cases:

Available programming templates
  • Plain HTML/CSS/JS for any static HTML/CSS/JS concept (CSS concepts, vanilla JS, browser based JavaScript things)
  • React 16.13 for anything "React" (uses snowpack template)
  • Next.js 9.5 for Next.js based labs
  • Node.js Plain for anything involving node.js (example - Node.js servers, REST API calls, etc.). Note that choosing this image also gives users ability to use MongoDB locally.
  • Python 3.8 - Use for python related labs
  • Others - all the other images, with the corresponding software installed

Note: Feel free to request any new language/runtime/framework in which you'd want to create a course, and we'll add support for it.

(4) Web Browser - If this is checked, users will see a web browser preview on their lab page showing output from a server listening on port 1337.

One important thing to note here is that anything you run on port 1337 in the code is automatically mapped to a public URL that the user can see in the web preview. (Ignore the URL .dns.codedamn.com... as it is randomly generated)

(5) Default Files Repository and (6) Branch Name - This is the URL (and branch name) of a GitHub repository which will be cloned before the user starts the lab. It provides a starting point for people in your lab.

For example, if you put https://github.com/codedamn-classrooms/react-accordion-project as default file repository and lab5 as default file branch name, here's what the user will see when they open the lab:

The GitHub repository is cloned for the user to interact with

(7) Default Open Tabs - This allows you to open a few files by default for better UX. For example, in the image above, if you'd like Question.jsx, App.jsx and snowpack.config.json files to be opened automatically, you'll write the following in the Default Open Tabs field: src/Question.jsx, src/App.jsx, snowpack.config.json

The first file (`src/Question.jsx`) will be the one visible; the other two will be in background tabs. This opens all three files in that order. Note that your users can always switch to other files from the file menu later.

Great! You have almost created the lab. Now let us see how we can test the user code.

Testing User Code

Testing user code is important because we want learners to have accountability and a "virtual" mentor that checks whether they passed the tests and learned the concepts. Let's look at part 2:

(8) Test File - This is the most important part of the testing interface. However, it only gets enabled once the lab is created (we will discuss it then).

(9) Challenges - This is a list of challenges you expect your users to pass. For example, look at the challenges in this lab here:

These challenges can be created by adding them in the following way:

These challenges get "linked" to the test file automatically, as we'll see soon.

(10) Compile command - This command is for languages which need a compilation step, for example C, C++, Java, etc. The reason this section is separate from "Run command" is to help users identify whether their code is failing at the compilation step or at runtime.

If you're using any language which does not require compilation (example - Python, JavaScript, HTML/CSS), you can leave this field empty.

(11) Test command - This is the command which will run your "test file" (which we still have to discuss). The test file you write will be stored on the cloud and will be pulled every time with a random name. Therefore, you cannot just write node starttest.js, as the file name might not be starttest.js.

Fortunately, codedamn exposes the name of the just-cloned test file in an environment variable called TEST_FILE_NAME. Therefore, you can run the test in the following way:

node $TEST_FILE_NAME

# Bash will use the value inside TEST_FILE_NAME (which points to the cloned test file)

Similarly, if you were running a Python test, you'd write:

python3 $TEST_FILE_NAME

and it'll run your test file!

(12) Solution link - This is an optional field: a link which can point to anything you want (GitHub repo, YouTube video, blog post, etc.) and which will take the user to that resource. If present, it is displayed as a shiny green button on the challenge page:

Clicking on the green button will open the link you specified in a new tab

(13) Lab State - This field allows you to set a few options:

  • Live Mode with debugging - This option means that your lab is live, and the user can see the stdout/stderr from their code when it is executed against the tests. They will get the logs when they press the "Run Tests" button here:
When the user presses "Run Tests", they'll see the stdout/stderr in the execute logs

Note that this option should almost always be preferred, unless you are creating a "real test". Although this mode is helpful to users for debugging, it can also be exploited to read your test file (which we still need to discuss), hence exposing how your tests work, which is why you'd disable it for a real test.

  • Live Mode (no output) - This mode is similar to the one above, except that no execute logs are sent to the user on code run. They can still play and interact with the whole IDE as usual.
  • Disable Lab - This disables the ability for everyone to run the lab (even you).

Once you press the save button, the view will change a little. You'll have a few more buttons at the top and bottom, and the Edit Test File button will be enabled.

Let's discuss 15, 16, 17 first.

(15) Update Lab - This button, of course, saves any changes you made to the lab.

(16) Test Lab - This button will open a new tab where you can test the lab interface like a user. Note that you will need a regular codedamn account for this. If you see a message saying "Please upgrade your account for more access", contact us as a creator and we'll reset the limit.

(17) Delete Lab - Well, this deletes your lab.

Let us now discuss the Edit Test File button, as it is super important. Click on it.

Adding a Test File

Once you click on the Edit Test File button, you'll be presented with the following interface:

Remember when you wrote node $TEST_FILE_NAME or python3 $TEST_FILE_NAME in the run command? This is the file that command will run. There are a few key roles for this file:

  1. Test the user code
  2. Write results to a special file for marking challenges as correct or incorrect

First of all, let's click on the Python Template button to understand the basics:

There are 4 important parts in the test file.

The top instructions (1) - these are helpful hints and environment variable information. Although we have discussed much of it already, I would highly recommend reading this.

The setup (2) - You import the relevant test suites/user code here. You might see we are using the environment variable USER_CODE_DIR. This is the user's directory where your default GitHub repo is cloned.

For example, if your repo contains just one file called index.html, you can find it at $USER_CODE_DIR/index.html. The environment variable $USER_CODE_DIR is the full path to the user's code directory.

Another important thing to observe here is the empty results array. In the tests below, you'll see that we push only true or false values into this array. This is important because the values inside this array are directly linked to the success/fail status of the challenges you created before. We will discuss this more very soon.

The test (3) - This is super important. This is the part where you'll test the user code. For example, the test above is:

# importlib and sys are imported in the setup section of the test file
try:
    userscript = importlib.import_module('script2')
    assert userscript.variable1 == 100, "variable1 should be set as 100"
    # test passed
    print("Test passed")
    results.append(True)
except:
    # test failed
    print("Your test failed", sys.exc_info())
    results.append(False)

You run the user's code, and if it passes, you append True to the results array. If the test fails, you append False.

You can create as many tests as you want, and every single test which appends True/False to results will be linked to the challenges you create. (More on this very soon.)

The result (4) - Finally, you simply write the array of true and false values to the file pointed to by the UNIT_TEST_OUTPUT_FILE environment variable. This is fixed boilerplate code and you'll never really need to change it.

Let us look at a few examples to understand better how evaluation works on codedamn:

Let's say your results array finally looks like this: [True, False, True, True, False], and you have 5 challenges in the UI. Each boolean is matched, in order, with the corresponding challenge. This combination of test file and challenges will produce the corresponding pass/fail status for every challenge each time the user runs their code with the Run Code button.

Therefore, there are two things you should take care of:

  1. Always make sure the length of your results array/list matches the number of challenges you created in the UI. If there are more challenges than booleans in results, the extra challenges will always stay "wrong". If there are more booleans than challenges, the extra results are discarded.
  2. The $UNIT_TEST_OUTPUT_FILE should contain only a JSON string representing an array of true/false values. If anything else is present in the file, no challenge will be evaluated.

Sample Lab (Node.js)

Here's a fully working sample lab to give you a starting point:

Problem Title

Setting up Express Server

Problem Description

Hello there!

Let's set up our Node server first of all. We'll use express as the framework for this classroom as it is easy to use.

If you look inside package.json or in the terminal available to you, you'll see that express is already installed, so you need not install it manually in the lab. Let us create a route in express first.

In express, you can create routes based on their access method - GET, POST, PUT, PATCH, etc.

To create a GET route, on say /hello-world, we can do the following:

app.get('/hello-world', (req, res) => {
	res.send('Hello from hello-world!')
})

Notice that we write /hello-world as the first parameter, followed by a function which contains req and res objects.

We can access all the information about the request using the req object, and we can send our response to the client using the res object. Finally, you see we use res.send to send a simple text message to the client.

To pass this lab, let’s perform the following tasks:

  • Create a new GET route in express on /short
  • When you visit /short, you should get Hello from short back as the response text.

Programming Template

Node.js Plain 12.x

Web Browser

True

Default Files Repository

https://github.com/codedamn-classrooms/node-mongodb-url-shortner

Default Files Branch Name

lab1

Default Open Tabs

index.js

Challenges

  1. You should create a GET route for <code>/short</code>
  2. The <code>/short</code> route should return <code>Hello from short</code> as the response

Compile Command

<empty>

Run Command

node $TEST_FILE_NAME

Solution Link

https://github.com/codedamn-classrooms/node-mongodb-url-shortner/blob/lab1-sol/index.js

Lab State

Live Mode with debugging

Test File

const fs = require('fs')
const assert = require('assert')
const fetch = require('node-fetch')
const { spawn, execSync } = require('child_process')

function delay(ms) {
	return new Promise((resolve) => setTimeout(resolve, ms))
}

async function retry(fn, ms) {
	try {
		await fn()
	} catch (error) {
		await delay(ms)
		return await retry(fn, ms)
	}
}

function waitForPort(port) {
	execSync(`lsof -a -i tcp:${port} -c node`)
	// if execSync doesn't throw, it was successful
	return true
}

;(async () => {
	const results = []

	// start the user static server
	spawn('bash', ['-c', `cd ${process.env.USER_CODE_DIR} && yarn install && yarn start`])
    
	// wait for app to attach port
	await retry(() => waitForPort(process.env.PUBLIC_PORT), 500)

	// Tests
	try {
		const data = await fetch(`http://localhost:${process.env.PUBLIC_PORT}/short`)
		assert(data.status >= 200 && data.status < 300)
		results.push(true)
		console.log('Test 1 passed')
	} catch (error) {
        console.log('Test 1 failed', error) // this is visible to the end user
		results.push(false)
	}

	try {
		const data = await fetch(`http://localhost:${process.env.PUBLIC_PORT}/short`).then((t) =>
			t.text()
		)
		assert(data.trim().toLowerCase() === 'hello from short')
		results.push(true)
        console.log('Test 2 passed')
	} catch (error) {
        console.log('Test 2 failed', error) // this is visible to the end user
		results.push(false)
	}

	fs.writeFileSync(process.env.UNIT_TEST_OUTPUT_FILE, JSON.stringify(results))
	process.exit(0)
})()

Conclusion

That's the end of the lab! Once created, you can distribute your practical learning to millions. Does it take effort? Sure. Is it worth it? Absolutely. With 700k+ minutes of code execution in the last 5 months, hands-on practice is becoming a powerhouse of learning to code.

Are you ready to become a codedamn creator and leverage this infrastructure to create your next awesome course? Apply to become a content creator here.

Also, please note that there are a lot of places where friction can be reduced (as you could see, things like starting a server could be converted into utility functions instead of being added to every test file). As a creator, feel free to share any feedback on improving the experience.

If you have any questions or doubts, please feel free to reach out via the contact form or at mehul[at]codedamn.com.
