{
"cells": [
{
"cell_type": "markdown",
"id": "87e72dc7",
"metadata": {},
"source": [
"# \"Upgrading\" Florida Tech's CAPP Reports\n",
"\n",
"Adam Lastowka\n",
"\n",
"## The Problem\n",
"\n",
"Florida Tech's course catalog is pretty good. It's got nice hyperlinking, helpful popups, and a reliable search system. It even includes degree requirements for various majors. Let's see a screenshot:\n",
"\n",
"\n",
"\n",
"Cool! Not too hard to follow, and very nice and interactive.\n",
"\n",
"But what if I want to check which of those degree requirements I'm meeting?\n",
"\n",
"Florida Tech's PAWS system for students has a handy tool just for that -- CAPP Reports (Curriculum, Advising, and Program Planning). Students can generate \"reports\" that tell them which requirements they need to meet / have met so far. Let's see how it looks:\n",
"\n",
"
\n",
"\n",
"Bleh. And this is just what I could fit in a screenshot -- these tables go on for pages. They aren't linked to the course catalog, so while planning courses, you need to have multiple tabs open while switching back and forth, double-checking that you entered the CRN right.\n",
"\n",
"What I'd like to have is something like this:\n",
"\n",
"
\n",
"\n",
"As a very visual person, a diagram like this one from UCol Boulder is what I need in order to properly think about my courses. The list in the course catalog is nice, but I like flowcharts -- especially ones that automatically track my progress.\n",
"\n",
"## The Solution\n",
"\n",
"Yes, I could just draw a flowchart for the Physics major. This would be simple and easy. Unfortunately, I am a computer programmer, and consequently duty-bound to automate any menial task I encounter.\n",
"\n",
"In this document I'll develop a procedure for transforming a CAPP report into a personalized, readable course dependency flowchart constructed from publicly available data.\n",
"\n",
"First we'll need Florida Tech's course catalog.\n",
"\n",
"## Extracting the Catalog\n",
"\n",
"1. The course catalog does not have an API, and is stored in its entirety on a series of webpages found at https://catalog.fit.edu/content.php?catoid=12&navoid=551. Download the .html files of all 26 of these pages and save them.\n",
"2. Combine the pages together. Open a linux shell (I'm on Windows, so I used WLS Ubuntu) and run `ls -tQ *.html | xargs cat > concat.html`\n",
"3. This page contains the course names, but not their descriptions, which are only shown when a course name is clicked on. Thankfully, this page *does* contain links to individual course pages with these descriptions on them. This command extracts the links to these locations (first grep gets lines, second gets links, then sed removes those pesky \"amp;\"s):\n",
"```bash\n",
" grep -e $'
tags because a couple of courses, namely PSY 6550 and\n", " # MAR 6899, are formatted differently and have
tags in their HTML. What's up with those?\n", " c.credit_hours = remove_tags(in_between(re.sub('
', '', sections[0]), '', '
'))[14:]\n",
"\n",
" c.description = sections[0].split('
')[1]\n",
" # Some of the descriptions say \"complements [some other course]\" or \"builds on [course]\"\n",
" # I use a catch-all here to grab those instances\n",
" if 'complements' in c.description.lower():\n",
" c.complements_courses = re.findall('[A-Z][A-Z][A-z] [0-9][0-9][0-9][0-9]', c.description)\n",
"\n",
" #print(sections[1])\n",
" for i in range(1, len(sections)):\n",
" # remove tags, strip, and remove non-breaking space character escapes\n",
" sec_notag = re.sub(' ', '', remove_tags(sections[i]).lstrip())\n",
"\n",
" # It's one of those lines that looks like \"(CC) (HU/SS) (LA) (Hon)\"\n",
" if re.sub('Hon', 'H', sec_notag).isupper():\n",
" tagcat = re.sub('\\(|\\)', ' ', sections[i])\n",
" # tagcat = re.sub('/', ' ', tagcat) # uncomment to remove the bipartite tags (like 'HU/SS')\n",
"\n",
" # remove empty strings\n",
" c.tags = list(filter(None, tagcat.split(' ')))\n",
"\n",
" elif sec_notag.startswith('Requirement'):\n",
" c.requirement = sec_notag.split(':')[1][1:]\n",
"\n",
" elif sec_notag.startswith('Prerequisite'):\n",
" \n",
" # get the junk out\n",
" sec_notag = re.sub(',', '', sec_notag)\n",
" sec_notag = re.sub('\\xa0', ' ', sec_notag)\n",
" \n",
" # ECE3331 has a typo in this section (no second space in 'ECE 3331or')\n",
" # fix those sorts of errors here\n",
" \n",
" # before we sub out the \"OR\"s, we need to sub out the \"cORequisites\" (this created some errors, heh)\n",
" sec_notag = re.sub('Corequisite', ' CRQ ', sec_notag)\n",
" sec_notag = re.sub('or', ' or ', sec_notag)\n",
" sec_notag = re.sub('and', ' and ', sec_notag)\n",
" \n",
" sec_notag = re.sub('/ {2,}/g', ' ', sec_notag)\n",
"\n",
" # split the prerequisite requirements\n",
" # e.g. '(PHY 0000 and MTH 0001) or BIO 1111'.split(' ')\n",
" # remove all extraneous strings from array\n",
" prereq_cmds = sec_notag.split(':')[1].split(' ')[1:-1]\n",
" while not prereq_cmds[-1]:\n",
" del prereq_cmds[-1]\n",
" c.prerequisites = prereq_cmds\n",
"\n",
" # corequisites are stored on the same line as prerequisites, parse them similarly\n",
" if 'CRQ' in sec_notag:\n",
" # slightly different slicing for formatting reasons\n",
" coreq_cmds = sec_notag.split(':')[2].split(' ')[1:]\n",
" while not coreq_cmds[-1]:\n",
" del coreq_cmds[-1]\n",
" c.corequisites = coreq_cmds\n",
"\n",
" elif sec_notag.startswith('Recommended'):\n",
" sec_notag = re.sub(',', '', sec_notag)\n",
" sec_notag = re.sub('\\xa0', ' ', sec_notag)\n",
" # this section has less consistent formatting, so we have to walk through it line-by-line\n",
" if \":\" in sec_notag:\n",
" part = sec_notag.split(':')[1]\n",
" starr = re.findall('(?:to|or|and).[A-Z]{3}\\s[0-9]{4}', part)\n",
" if len(starr) > 0:\n",
" if starr[0].startswith('to '):\n",
" starr[0] = starr[0][3:]\n",
" rec_cmds = []\n",
" for s in starr:\n",
" rec_cmds += s.split(' ')\n",
" c.recommended = rec_cmds\n",
" else:\n",
" # yeah, this is a possibility. For just one course. :|\n",
" c.recommended = [sec_notag]\n",
"\n",
" c.complements_courses = reformat_list(c.complements_courses)\n",
" c.corequisites = reformat_list(c.corequisites)\n",
" c.prerequisites = reformat_list(c.prerequisites)\n",
" c.recommended = reformat_list(c.recommended)\n",
"\n",
" course_list.append(c)\n",
" return course_list"
]
},
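{
"cell_type": "markdown",
"id": "b81b3c55",
"metadata": {},
"source": [
"`parse_html` leans on a record class and a few helpers whose definitions aren't shown above. Here's a minimal sketch of what they might look like -- the names and fields come from the code that uses them, but the bodies are my best guesses:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e07d4a91",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import re\n",
"\n",
"class Course:\n",
"    # a plain record holding everything we scrape about one course\n",
"    def __init__(self):\n",
"        self.course_id = ''            # e.g. 'PHY1001'\n",
"        self.course_code = ''          # e.g. 'PHY'\n",
"        self.course_num = 0            # e.g. 1001\n",
"        self.course_name = ''\n",
"        self.credit_hours = ''\n",
"        self.description = ''\n",
"        self.requirement = ''\n",
"        self.tags = []\n",
"        self.prerequisites = []\n",
"        self.corequisites = []\n",
"        self.recommended = []\n",
"        self.complements_courses = []\n",
"        self.met = False               # filled in later, from the CAPP report\n",
"        self.in_major = False\n",
"        self.req_attrs = []\n",
"\n",
"def iterate_course_pages(path):\n",
"    # one saved .html file per course; the block separator here is a guess\n",
"    for fname in sorted(os.listdir(path)):\n",
"        with open(os.path.join(path, fname), encoding='utf-8') as f:\n",
"            yield Course(), f.read().split('</div>')\n",
"\n",
"def remove_tags(s):\n",
"    # strip anything that looks like an HTML tag\n",
"    return re.sub('<[^>]+>', '', s)\n",
"\n",
"def in_between(s, start, end):\n",
"    # the substring strictly between the first `start` and the next `end`\n",
"    a = s.find(start) + len(start)\n",
"    return s[a:s.find(end, a)]\n",
"\n",
"def reformat_list(ls):\n",
"    # squeeze 'MTH 1001'-style tokens into 'MTH1001' and drop empty strings\n",
"    return [x.replace(' ', '') for x in ls if x and x.strip()]"
]
},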
{
"cell_type": "markdown",
"id": "f0e36721",
"metadata": {},
"source": [
"### Network Construction\n",
"\n",
"I think that all things considered, the above code does a great job of getting course information from the HTML. The Course list is modular enough to stand on its own and be used for other means -- making the network is a secondary analysis step."
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "7b6438fd",
"metadata": {},
"outputs": [],
"source": [
"# For meaningful coloring\n",
"def unique_depts(course_list):\n",
" return list(set([x.course_code for x in course_list]))\n",
"\n",
"def col_to_hex(x):\n",
" return '#{0:02x}{1:02x}{2:02x}'.format(max(0, min(int(x[0]*175.0 + 80.0), 255)), max(0, min(int(x[1]*175.0 + 80.0), 255)), max(0, min(int(x[2]*175.0 + 80.0), 255)))"
]
},
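{
"cell_type": "markdown",
"id": "c3a91f07",
"metadata": {},
"source": [
"A quick sanity check of the color helper: `distinctipy` hands back RGB floats in [0, 1], and `col_to_hex` squeezes them into a brighter range so node labels stay readable. The exact colors vary from run to run:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5b20e18",
"metadata": {},
"outputs": [],
"source": [
"# three visually distinct colors, e.g. ['#e0527f', '#52e09a', '#7f52e0']\n",
"print([col_to_hex(c) for c in distinctipy.get_colors(3)])"
]
},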
{
"cell_type": "code",
"execution_count": 36,
"id": "a931a40f",
"metadata": {},
"outputs": [],
"source": [
"# Dumb wrapper block made for show_network.\n",
"# Warning: This whole cell of code is inefficient, hacky, and messy.\n",
"# A lot of bugfixing and database correction happened here. I'm going to leave it\n",
"# as-is for now, since this visualization is tangential to the project.\n",
"\n",
"def add_node_group(data, DG, dept_list, dept_color_map, course_list, courses_pos):\n",
" xp = 0\n",
" yp = 0\n",
" # Add an edge going to every node mentioned, no matter where it's mentioned.\n",
" # Ignore the operators, too: [(), and, or]\n",
" for x in data:\n",
" if len(x) == 7 and x not in DG.nodes() and x[:3] in dept_list:\n",
" if courses_pos != None:\n",
" xp = courses_pos[x]['x']\n",
" yp = courses_pos[x]['y']\n",
" DG.add_node(x, shape='box', color=col_to_hex(dept_color_map[x[:3]]), title=\"No Data!\", x=xp, y=yp)\n",
"\n",
"def show_network(course_list, courses_pos = None):\n",
" DG = nx.DiGraph()\n",
" \n",
" dept_list = unique_depts(course_list)\n",
" \n",
" # It's been a while since I touched Python. I forgot how fun one-liners like these are.\n",
" dept_color_map = dict(zip(dept_list, distinctipy.get_colors(len(dept_list))))\n",
" course_map = dict(zip([x.course_id for x in course_list], course_list))\n",
" \n",
" for c in course_list:\n",
" if len(c.prerequisites) + len(c.corequisites) + len(c.recommended) + len(c.complements_courses) > 0:\n",
" xp = 0\n",
" yp = 0\n",
" if courses_pos != None:\n",
" xp = courses_pos[c.course_id]['x']\n",
" yp = courses_pos[c.course_id]['y']\n",
" DG.add_node(c.course_id, shape='box', dept=c.course_code, color=col_to_hex(dept_color_map[c.course_code]), title=\"No Data!\", x=xp, y=yp)\n",
" \n",
" add_node_group(c.prerequisites, DG, dept_list, dept_color_map, course_list, courses_pos)\n",
" add_node_group(c.corequisites, DG, dept_list, dept_color_map, course_list, courses_pos)\n",
" add_node_group(c.recommended, DG, dept_list, dept_color_map, course_list, courses_pos)\n",
" add_node_group(c.complements_courses, DG, dept_list, dept_color_map, course_list, courses_pos)\n",
" \n",
" for x in c.prerequisites:\n",
" if len(x) == 7:\n",
" DG.add_edge(c.course_id, x, title='Prerequisite')\n",
" for x in c.corequisites:\n",
" if len(x) == 7:\n",
" DG.add_edge(c.course_id, x, title='Corequisite')\n",
" for x in c.recommended:\n",
" if len(x) == 7:\n",
" DG.add_edge(c.course_id, x, title='Recommended Knowledge')\n",
" for x in c.complements_courses:\n",
" if len(x) == 7:\n",
" DG.add_edge(c.course_id, x, title='Complements')\n",
" \n",
" # Hey, guess what: AEE3150 references courses that DON'T EXIST. I think they messed up when typing the code?\n",
" # this is stupid the data is stupid this whole entire function is stupid\n",
" # duuuuuuuuuuuuuuhhhhhhhhhhhhh\n",
" DG.nodes['MAE3083']['x'] = DG.nodes['AEE3150']['x']\n",
" DG.nodes['MAE3083']['y'] = DG.nodes['AEE3150']['y']\n",
" DG.nodes['MAE3161']['x'] = DG.nodes['AEE3150']['x']\n",
" DG.nodes['MAE3161']['y'] = DG.nodes['AEE3150']['y']\n",
" \n",
" # You know, maybe there isn't a real database. Maybe it's just a bunch of unlinked plaintext in an excel file.\n",
" # That would explain all the bad entries, at least.\n",
" for x in DG.nodes():\n",
" if x in course_map:\n",
" DG.nodes[x]['title'] = course_map[x].course_name\n",
" \n",
" # display with pyvis for interactibility\n",
" g=Network(height=900, width=1400, notebook=True, directed=True)\n",
" g.force_atlas_2based()\n",
" g.set_edge_smooth('continuous')\n",
" \n",
" # Hrm. I'd like physics to be on by default, but then visjs tries to stabilize the network,\n",
" # even though I turn that off here. Fix in HTML?\n",
" g.toggle_physics(False)\n",
" g.toggle_stabilization(courses_pos==None)\n",
" \n",
" g.from_nx(DG) # I do love how friendly Python can be (when you're not installing modules, that is)\n",
" g.show_buttons('physics')\n",
" g.show(\"grph.html\")\n",
" # Wow, am using these comments as a blogging platform?\n",
" # Actually, I think I'm just procrastinating writing the PDAG logic.\n",
" # beebeebooboo\n",
" # okay, fine\n",
" return DG"
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "2227b061",
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"parsing...\n",
"making network...\n",
"done.\n"
]
}
],
"source": [
"print('parsing...')\n",
"CL = parse_html('webpages/descpages/') # This can take a while when run for the first time\n",
"print('making network...')\n",
"\n",
"# I exported all the positions so we don't have to wait for the network to stabilize.\n",
"# I got them with ForceAtlas2 and some manual dragging of components out of local minima\n",
"\n",
"saved_pos = json.loads(open('full_network_positions.json', 'r').read())\n",
"DG = show_network(CL, saved_pos)\n",
"print('done.')"
]
},
{
"cell_type": "markdown",
"id": "96bdf6c8",
"metadata": {},
"source": [
"### First Impressions\n",
"\n",
"NOTE: If you don't want to bother installing the libraries required to run this notebook, you can view the network [here](grph.html).\n",
"\n",
"Let's see what we're working with:\n",
"\n",
"\n",
"\n",
"Neat!\n",
"\n",
"Nodes are courses, edges are relationships between them. Colors are based on departament codes. Obviously it's a [DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph) even though the directions aren't visible here. We've got a lot of isolated vertices, plenty of small components, and one massive one. The dense cluster on the left is an interesting anomaly. We'll explore that later. For now we need to find a way to handle the logical operators in these prerequisite lists -- stuff like `(MTH0000 and MTH0001) or MTH0002`."
]
},
{
"cell_type": "markdown",
"id": "a2a252e6",
"metadata": {},
"source": [
"### PDAGs (Propositional Directed Acyclic Graphs)\n",
"\n",
"Right now, edge data looks like this:\n",
"\n",
"`[ '(', 'MTH0000', 'and', 'MTH0001', ')', 'or', 'MTH0002']`\n",
"\n",
"In the image from the previous section, all the logical operators and parenthesis are completely ignored. We're losing data.\n",
"\n",
"Enter [PDAGs](https://en.wikipedia.org/wiki/Propositional_directed_acyclic_graph): a means of representing logical expressions in graphs. The concept is fairly simple: A node for every operand and an edge for every operator. If we generate these graphs for every course, we can compose them with the course network (while allowing them to retain some labels pertaining to their type and function).\n",
"\n",
"This isn't too hard to do, especially since our PDAGs are already simplified past negation normal form -- their operators are limited to just conjunction and disjunction. Also, since both `and` and `or` are associative and commutative, we'll use just one `or` node for cases like `MTH1001 or MTH1010 or MTH1702`."
]
},
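{
"cell_type": "markdown",
"id": "f2c84b39",
"metadata": {},
"source": [
"To make the shape concrete, here's the structure we're aiming for when `MTH1001 or MTH1010 or MTH1702` is the prerequisite of some course (`XXX0000` is a hypothetical stand-in):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a4d61c20",
"metadata": {},
"outputs": [],
"source": [
"pdag = nx.DiGraph()\n",
"pdag.add_node('XXX0000')              # the course that owns the prerequisite\n",
"pdag.add_node('or-node', label='OR')  # one shared OR node, since 'or' is associative\n",
"pdag.add_edge('XXX0000', 'or-node', title='prerequisite')\n",
"for course in ['MTH1001', 'MTH1010', 'MTH1702']:\n",
"    pdag.add_edge('or-node', course)  # operands hang off the operator node"
]
},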
{
"cell_type": "markdown",
"id": "9b13fcd3",
"metadata": {},
"source": [
"#### Prepping the Data\n",
"\n",
"The dataset has some ambiguous entries that don't use parenthesis despite having multiple operators.\n",
"Let's introduce an order of operations:"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "16ce0df6",
"metadata": {},
"outputs": [],
"source": [
"# Issue: not every entry containing multiple operators uses parenthesis (:|).\n",
"# Here are all the tricky cases:\n",
"\n",
"# ['BIO5210', 'and', 'BME5300', 'or', 'CHE5300']\n",
"# ['(', 'CSE1400', 'or', 'MTH1000', 'or', 'MTH1001', 'or', 'MTH1002', 'or', 'MTH1010', 'or', 'MTH1020',\n",
"# 'or', 'MTH1603', 'or', 'MTH1701', 'or', 'MTH1702', 'or', 'MTH2001', 'or', 'MTH2010', 'or', 'MTH2051', \n",
"# 'or', 'MTH2201', 'or', 'MTH3200', ')', 'or', '(', 'MTH1011', 'and', 'MTH1012', ')', 'and', 'PSY1411']\n",
"# ['BIO4101', 'and', 'BIO4110', 'or', 'BIO4111']\n",
"# ['CSE1502', 'or', 'CSE1503', 'or', 'CSE2050', 'and', 'MTH2201']\n",
"# ['BME3030', 'and', 'BME5300', 'or', 'CHE5300']\n",
"# ['BIO2110', 'or', 'BIO2111', 'and', 'BIO2301']\n",
"# ['BIO2110', 'or', 'BIO2111', 'and', 'BIO4010', 'or', 'BIO4011']\n",
"# ['CHE1091', 'and', 'CHE3260', 'or', 'CHM2002']\n",
"# ['BME3260', 'or', 'CHE3260', 'and', 'AEE3083', 'or', 'BME3081', 'and', 'MEE2024', 'or', 'CHE4568']\n",
"# ['MTH1002', 'or', 'MTH1020', 'and', 'BME3260', 'or', 'CHE3260', 'or', 'CSE2410', 'or', 'ECE3551']\n",
"# ['CSE1001', 'and', 'MTH2201', 'or', 'MTH3200']\n",
"# ['CSE1001', 'and', 'MTH2201', 'or', 'MTH3200']\n",
"\n",
"# In all of these, it seems like OR precedes AND (in order of ops.)\n",
"# We'll assume that's the case across the board.\n",
"\n",
"# First, we'll concatenate everything that is already grouped, so we can treat it as one element.\n",
"def concat_paren_groups(list_in):\n",
" ls = []\n",
" # Just gonna roll with a simple FSM here.\n",
" in_parens = False\n",
" paren_cat = []\n",
" for x in list_in:\n",
" if in_parens:\n",
" if x == ')':\n",
" in_parens = False\n",
" # This is how I handle nested parenthesis. Since this function is already aggregating\n",
" # everything in the parens, it might as well also send it back to the correction function.\n",
" # It's sort of superfluous since nested parens never happen in the data, though...\n",
" corrected = '~'.join(correct_parens_recur(paren_cat))\n",
" ls.append('(~' + corrected + '~)') # the tilde is a temporary substitute for a space\n",
" paren_cat = []\n",
" else:\n",
" paren_cat.append(x)\n",
" elif x == '(':\n",
" in_parens = True\n",
" else:\n",
" ls.append(x)\n",
" return ls\n",
"\n",
"# Next, we need to identify where to add parenthesis.\n",
"\n",
"# Yes, it does call itself, but it does it through concat_paren_groups()\n",
"def correct_parens_recur(list_in):\n",
" # loop backwards\n",
" ls = list_in.copy()\n",
" # deal with any existing grouping\n",
" if any('(' in x for x in ls):\n",
" ls = concat_paren_groups(ls)\n",
" state='nul'\n",
" state_ever_changed = False\n",
" last_change = len(ls)-1\n",
" for i in range(len(ls)-1, 0, -1):\n",
" \n",
" # if we find a different operator at our current position\n",
" if ls[i] != state and (ls[i]=='and' or ls[i]=='or'):\n",
" # ... and it's not the first real state we find\n",
" if not state == 'nul':\n",
" # insert parenthesis around any OR set that is on the same hierarchical level as an AND\n",
" if state == 'or':\n",
" ls.insert(last_change+2, ')')\n",
" ls.insert(i+1, '(')\n",
" last_change = i\n",
" state_ever_changed = True\n",
" \n",
" state = ls[i]\n",
" \n",
" # throw the parens in if we reached the end of the list and the conditions are right\n",
" if state_ever_changed and state == 'or':\n",
" ls.insert(last_change+2, ')')\n",
" ls.insert(0, '(')\n",
" \n",
" return ls\n",
"\n",
"# anything not a course, operator, or parenthesis returns false\n",
"def valid_course_data(x):\n",
" return bool(re.search('\\(|\\)|or|and|[A-Z]{3}[0-9]{4}', x))\n",
"\n",
"# changes \"( MTH1000 or )\" to \"( MTH1000 )\"\n",
"# takes a list of strings as input though, not a string\n",
"def remove_degenerate_operators(list_in):\n",
" list_out = []\n",
" was_op = False\n",
" for x in list_in:\n",
" list_out.append(x)\n",
" if was_op and x == ')':\n",
" del list_out[-2]\n",
" was_op = ('or' in x.lower() or 'and' in x.lower())\n",
" return list_out\n",
"\n",
"# changes \"( ( ( MTH1000 ) ) ) and MTH1001\" to \"MTH1000 and MTH1001\"\n",
"# takes a list of strings as input though, not a string\n",
"def remove_degenerate_parens(list_in):\n",
" list_out = []\n",
" list_tmp = list_in.copy()\n",
" \n",
" # loop so we get all the nested parenthesis\n",
" # there is a way to get them in one pass (keep track of depth), but I don't feel like writing it\n",
" has_degen = True\n",
" while has_degen:\n",
" cells_since_Lparen = 0\n",
" to_add = []\n",
" list_out = []\n",
" has_degen = False\n",
" for x in list_tmp:\n",
" to_add.append(x)\n",
" if x == '(':\n",
" cells_since_Lparen = 0\n",
" elif x == ')':\n",
" # If we see a ')' 2 spaces after a '(', then the parenthesis only enclose\n",
" # one value, so we can delete them.\n",
" if cells_since_Lparen == 2:\n",
" del to_add[-3] # crazy how this works in Python\n",
" del to_add[-1]\n",
" has_degen = True\n",
" list_out.extend(to_add)\n",
" to_add = []\n",
" cells_since_Lparen += 1\n",
" list_out.extend(to_add)\n",
" list_tmp = list_out.copy()\n",
" return list_out\n",
"\n",
"# wrapper function for correct_parens_recur\n",
"# also removes any non-course data and fixes some formatting issues\n",
"def fix_formatting(list_in):\n",
" # remove degenerates\n",
" ls = remove_degenerate_operators(list_in)\n",
" ls = remove_degenerate_parens(ls)\n",
" \n",
" ls = correct_parens_recur(ls)\n",
" \n",
" # remove the whitespace placeholders ('~'), and any zero-width space characters ('\\u200b')\n",
" ls = re.sub('\\u200b', '', re.sub('~', ' ', ' '.join(ls))).split(' ')\n",
" \n",
" # remove any 'background knowledge' or similar non-parsable entries\n",
" ls = list(filter(valid_course_data, ls))\n",
" # remove any trailing operators as a result of the previous filter\n",
" if len(ls) > 0 and (ls[0] == 'and' or ls[0] == 'or'): del ls[0]\n",
" if len(ls) > 0 and (ls[-1] == 'and' or ls[-1] == 'or'): del ls[-1]\n",
" \n",
" # get rid of operators and do another paren pass\n",
" ls = remove_degenerate_operators(ls)\n",
" ls = remove_degenerate_parens(ls)\n",
" \n",
" return ls"
]
},
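{
"cell_type": "markdown",
"id": "b6e93d41",
"metadata": {},
"source": [
"A quick check of `fix_formatting()` on the first ambiguous entry from the list above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c8f04e52",
"metadata": {},
"outputs": [],
"source": [
"# the OR pair gets grouped first, matching our assumed order of operations:\n",
"# ['BIO5210', 'and', '(', 'BME5300', 'or', 'CHE5300', ')']\n",
"print(fix_formatting(['BIO5210', 'and', 'BME5300', 'or', 'CHE5300']))"
]
},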
{
"cell_type": "markdown",
"id": "7c78ba98",
"metadata": {},
"source": [
"#### Algorithm Time\n",
"\n",
"The purpose of this function is to create a small NetworkX structure that can be overlayed on our complete network, or a subset of it. It needs to generate PDAGs from a logical expression in infix notation. This expression may contain nested parenthesis, but its operators are limited to `and` and `or`. This task is somewhat analogous to that of an operator precedence parser."
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "a9e2813e",
"metadata": {},
"outputs": [],
"source": [
"# We'll use a recursive function\n",
"def parse_conns_recur(ls, root_node_name, edge_label='???'):\n",
" DG = nx.DiGraph()\n",
" node_type = 'NULL'\n",
" if len(ls) == 0:\n",
" DG.add_node(root_node_name)\n",
" return DG\n",
" if len(ls) == 1:\n",
" # this case is only triggered if the root node has no logic\n",
" DG.add_node(ls[0])\n",
" DG.add_edge(root_node_name, ls[0], title=edge_label)\n",
" else:\n",
" in_parens = 0\n",
" paren_cat = []\n",
" to_add = []\n",
" to_add_parens = []\n",
" for x in ls:\n",
" if x == '(':\n",
" in_parens += 1\n",
" \n",
" if in_parens > 0:\n",
" paren_cat.append(x)\n",
" \n",
" else:\n",
" if len(x) == 7:\n",
" to_add.append(x)\n",
" else:\n",
" node_type = x\n",
" \n",
" if x == ')':\n",
" in_parens -= 1\n",
" if in_parens == 0:\n",
" to_add_parens.append(paren_cat[1:-1])\n",
" paren_cat = []\n",
" if len(to_add) > 0 or len(to_add_parens) > 0:\n",
" DG.add_node(root_node_name)\n",
" this_node_name = ' '.join(ls)\n",
" DG.add_node(this_node_name, label=node_type.upper())\n",
" DG.add_edge(root_node_name, this_node_name, title=edge_label)\n",
" if len(to_add) > 0:\n",
" for x in to_add:\n",
" DG.add_node(x)\n",
" DG.add_edge(this_node_name, x, title=edge_label)\n",
" if len(to_add_parens) > 0:\n",
" for x in to_add_parens:\n",
" SG = parse_conns_recur(x, this_node_name, edge_label)\n",
" DG = nx.compose(DG, SG)\n",
" return DG\n",
"\n",
"# Another wrapper to correct the parenthesis in our input list\n",
"def get_PDAG(ls, root_node_name, edge_label):\n",
" filt_list = fix_formatting(ls)\n",
" DG = parse_conns_recur(filt_list, root_node_name, edge_label)\n",
" return DG"
]
},
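{
"cell_type": "markdown",
"id": "d9a15f63",
"metadata": {},
"source": [
"A quick test with hypothetical course codes, to see the shape `get_PDAG` produces:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e0b26a74",
"metadata": {},
"outputs": [],
"source": [
"demo = get_PDAG(['(', 'PHY0000', 'and', 'MTH0001', ')', 'or', 'BIO1111'], 'XXX0000', 'prerequisite')\n",
"for a, b in demo.edges():\n",
"    print(a, '->', b)\n",
"# XXX0000 feeds an OR node; the OR node fans out to BIO1111 and an AND node,\n",
"# and the AND node fans out to PHY0000 and MTH0001"
]
},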
{
"cell_type": "markdown",
"id": "fe7ff7dd",
"metadata": {},
"source": [
"#### Displaying the Network (again)\n",
"\n",
"We'll display it like how we did before, but a little neater this time."
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "aa6ce9df",
"metadata": {},
"outputs": [],
"source": [
"# SG for... uh... supergraph?\n",
"SG = nx.DiGraph()\n",
"\n",
"# this can take a few seconds\n",
"for x in CL:\n",
" DG2 = get_PDAG(x.prerequisites, x.course_id, 'prerequisite')\n",
" SG = nx.compose(SG, DG2)\n",
" # complements_courses has no logical operators, it's just a list of courses, so we won't pass it to parse_conns"
]
},
{
"cell_type": "code",
"execution_count": 41,
"id": "89d53576",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"dept_list = unique_depts(CL)\n",
"# It's been a while since I touched Python. I forgot how fun one-liners like these are.\n",
"dept_color_map = dict(zip(dept_list, distinctipy.get_colors(len(dept_list), pastel_factor=0.85)))\n",
"\n",
"def disp_PDAG(pdag_graph, remove_isolates=True):\n",
" gn=Network(height=900, width=1400, notebook=False, directed=True)\n",
"\n",
" for x in pdag_graph.nodes():\n",
" pdag_graph.nodes[x]['shape'] = 'box'\n",
" if 'label' in pdag_graph.nodes[x]:\n",
" if pdag_graph.nodes[x]['label'] == 'OR':\n",
" pdag_graph.nodes[x]['label'] = ' '\n",
" pdag_graph.nodes[x]['color'] = '#ffaaaa'\n",
" pdag_graph.nodes[x]['shape'] = 'triangleDown'\n",
" if pdag_graph.nodes[x]['label'] == 'AND':\n",
" pdag_graph.nodes[x]['label'] = ' '\n",
" pdag_graph.nodes[x]['color'] = '#aaffaa'\n",
" pdag_graph.nodes[x]['shape'] = 'triangle'\n",
" else:\n",
" code = x[0:3]\n",
" if code in dept_color_map:\n",
" pdag_graph.nodes[x]['color'] = col_to_hex(dept_color_map[code])\n",
"\n",
" pdag_graph.remove_nodes_from(list(nx.isolates(pdag_graph)))\n",
" gn.force_atlas_2based()\n",
" gn.set_edge_smooth('continuous')\n",
" gn.toggle_physics(False)\n",
" gn.from_nx(pdag_graph)\n",
" gn.show_buttons('physics')\n",
" gn.show(\"grph_pdag.html\")\n",
" \n",
"disp_PDAG(SG)"
]
},
{
"cell_type": "markdown",
"id": "641b2d17",
"metadata": {},
"source": [
"### Second Impressions\n",
"\n",
"NOTE: If you don't want to bother installing the libraries required to run this notebook, you can view the network [here](grph_pdag.html). Be sure to enable physics.\n",
"\n",
"The network (this time of just prerequisites) has roughly the same large-scale structure:\n",
"\n",
"
\n",
"\n",
"But if we zoom in, we can see there's a lot more going on at small scales.\n",
"\n",
"
\n",
"\n",
"The red, downward-facing triangles represent an `OR` operator, and the green upward-facing triangles represent an `AND` operator. With this, the network is now able to fully represent the logic in the course catalog data."
]
},
{
"cell_type": "markdown",
"id": "0386597d",
"metadata": {},
"source": [
"## Reformatting the CAPP Report\n",
"\n",
"Now that we've constructed our course graph, we have the data necessary to make sense of the CAPP report."
]
},
{
"cell_type": "markdown",
"id": "a952557e",
"metadata": {},
"source": [
"### Extracting the Report\n",
"\n",
"PAWS can display CAPP reports in three formats: Detailed, General, or Additional Information. I'll select the \"detailed\" version:\n",
"\n",
"
\n",
"\n",
"And I'll manually download the .html page by hitting `ctrl+s` in my browser.\n",
"\n",
"
\n",
"\n",
"The data on these pages is stored in tables, so we should probably keep it in that format. Let's try using `pandas.read_html()` to get the information:"
]
},
{
"cell_type": "code",
"execution_count": 66,
"id": "9d2b5f7b",
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0 Yes\n",
"1 NaN\n",
"2 NaN\n",
"3 COM\n",
"4 NaN\n",
"5 1101\n",
"6 NaN\n",
"7 202008\n",
"8 COM\n",
"9 1101\n",
"10 Composition and Rhetoric\n",
"11 NaN\n",
"12 3.000\n",
"13 T\n",
"14 T\n",
"15 NaN\n",
"Name: 2, dtype: object\n"
]
}
],
"source": [
"# read_html gives us a list of dataframes. The indices are as follows:\n",
"# [0] - nothing useful\n",
"# [1] - overall credit requirements\n",
"# [2] - program information\n",
"# [3:-3] - actual course reqs.\n",
"# [-3] - in-progress courses (MAY NOT APPEAR)\n",
"# [-2] - courses not used (MAY NOT APPEAR)\n",
"# [-1] - letter key (H - History, etc.)\n",
"CAPP_data = pd.read_html('webpages/CAPPpages/detailed 2.html')\n",
"print(CAPP_data[3].iloc[2, :]) # try printing a row"
]
},
{
"cell_type": "markdown",
"id": "4550f30e",
"metadata": {},
"source": [
"Wow, that worked!\n",
"\n",
"This doesn't usually happen!\n",
"\n",
"Let's start crunching it."
]
},
{
"cell_type": "code",
"execution_count": 43,
"id": "51e52ff1",
"metadata": {},
"outputs": [],
"source": [
"credit_info = CAPP_data[1]\n",
"program_info = CAPP_data[2]\n",
"course_info = CAPP_data[3:]\n",
"# Saves all data to .csv for debugging\n",
"# Excel or google sheets is great for looking at all the info\n",
"pd.concat([credit_info, program_info, *course_info]).to_csv('webtest.csv', encoding='utf-8',sep='\\t')"
]
},
{
"cell_type": "code",
"execution_count": 44,
"id": "70956e30",
"metadata": {},
"outputs": [],
"source": [
"# checks if any strings in a dataframe contain a value\n",
"# mild TODO: vectorize this somehow\n",
"def is_in_df(df, my_str):\n",
" for ri, row in df.iterrows():\n",
" for ci, val in row.items():\n",
" if my_str in str(val):\n",
" return True\n",
" return False\n",
"\n",
"req_data = []\n",
"other_data = []\n",
"for table in course_info:\n",
" # strip out all whitespace\n",
" table = table.apply(lambda x: x.str.strip() if x.dtype == \"object\" else x)\n",
" # look a couple code blocks below for a list of all the metadata that dataframes are tagged with\n",
" \n",
" if is_in_df(table, 'R - Currently Registered'):\n",
" # this is the 'Source Code Key' table\n",
" table.attrs['type'] = 'SCC'\n",
" other_data.append(table)\n",
" elif table.shape[1] == 7:\n",
" # this is the 'In-Progress Courses' table\n",
" table.attrs['type'] = 'IPC'\n",
" other_data.append(table)\n",
" elif table.shape[1] == 6:\n",
" # this is the 'Courses Not Used' table\n",
" table.attrs['type'] = 'CNU'\n",
" other_data.append(table)\n",
" elif table.shape[1] == 16:\n",
" # this is one of the actual course tables\n",
" # print('REQUIREMENTS FOUND: ' + str(table.shape[0]-3) + ' condition(s)')\n",
" table.attrs['type'] = 'REQ'\n",
" \n",
" table_title = table.iloc[0, 3]\n",
" \n",
" # save table metadata in dataframe attributes\n",
" req_infos = '-'.join(table_title.split('-')[:-1])\n",
" table.attrs['req id'] = req_infos.split('(')[0].strip()\n",
" table.attrs['credits'] = float(req_infos.split('(')[-1].split('credits')[0])\n",
" table.attrs['met'] = not ('NOT' in table_title.split('-')[-1].upper())\n",
" table.attrs['is semester'] = 'Sem1' in table_title or 'Sem2' in table_title\n",
" if table.attrs['is semester']:\n",
" i = -1\n",
" sm = table.attrs['req id'].lower()\n",
" if 'freshmen' in sm:\n",
" i = 0\n",
" elif 'sophomore' in sm:\n",
" i = 2\n",
" elif 'junior' in sm:\n",
" i = 4\n",
" elif 'senior' in sm:\n",
" i = 6\n",
" else:\n",
" table.attrs['is semester'] = False\n",
" \n",
" if '2' in sm:\n",
" i = i+1\n",
" table.attrs['semester id'] = i\n",
" \n",
" req_data.append(table.iloc[2:-1 , :]) # These rows have nothing useful"
]
},
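{
"cell_type": "markdown",
"id": "f1c37b85",
"metadata": {},
"source": [
"Let's peek at a few of the tagged tables (these values come from my report; yours will differ):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a2d48c96",
"metadata": {},
"outputs": [],
"source": [
"for t in req_data[:3]:\n",
"    print(t.attrs['req id'], '| met:', t.attrs['met'], '| credits:', t.attrs['credits'])"
]
},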
{
"cell_type": "markdown",
"id": "89d2cbce",
"metadata": {},
"source": [
"We've successfully extracted the metadata from our tables, and removed all junk data. Now we need to actually parse our data. Thankfully, we already have a PDAG parser that takes input in the `['(', 'MTH2001', 'OR', 'MTH2011', ')']` form! We just need to adapt the present information to fit this standard.\n",
"\n",
"Also, we have the handy `fix_formatting()` function from earlier, so we won't have to worry about parenthesis."
]
},
{
"cell_type": "code",
"execution_count": 45,
"id": "ca649efc",
"metadata": {},
"outputs": [],
"source": [
"for i in range(0, len(req_data)):\n",
" df = req_data[i]\n",
" # I'm going to iterate through the dataframes.\n",
" # This is an anti-pattern in Pandas, and generally frowned upon, but I think\n",
" # vectorization would introduce unnecessary complexity into the code.\n",
" \n",
" # Once again, the formatting in the data is inconsistent.\n",
" # There are a few weird cases on my report:\n",
" #\n",
" # 1: Every column says PHY3152/53 or something like that\n",
" # Sometimes it's \"A and B\", other times it's \"A & B\", or \"A/B\".\n",
" # By 'every', I mean in rule/subj/attrib/low/high.\n",
" # This appears to happen for all courses with labs.\n",
" # I'm pretty confident that this wasn't an intentional feature.\n",
" #\n",
" # 2: low/high are actually used (e.g. low:2000, high:4999)\n",
" # Each instance of this case is paired with one of the following cases:\n",
" #\n",
" # 3: Low/high are set to 2XXX and 4XXX, respectively\n",
" # I guess either case 2 or 3 wasn't working, so they put in the other but didn't remove the old one?\n",
" # Anyway, they mean the same thing. Probably safe to include both instances in the network, then\n",
" # remove the duplicate edge? I want to mess with the formatting as little as possible\n",
" #\n",
" # 4: The row does not specify a course range / CRN.\n",
" # This happens with elective requirements and other gen-ed stuff.\n",
" #\n",
" # 5: The row only contains a right parenthesis.\n",
" #\n",
" # THERE IS A HIGH CHANCE THAT ADDITIONAL CAPP REPORTS WILL HAVE CASES I HAVEN'T DOCUMENTED HERE\n",
" \n",
" statement = []\n",
" for i in range(0, df.shape[0]):\n",
" \n",
" met = df.iloc[i, 0]\n",
" cond = df.iloc[i, 1]\n",
" \n",
" rule = df.iloc[i, 2]\n",
" subj = str(df.iloc[i, 3])\n",
" low = str(df.iloc[i, 5])\n",
" high = str(df.iloc[i, 6])\n",
" \n",
" course_id_2 = str(df.iloc[i, 8]) + str(df.iloc[i, 9])\n",
" \n",
" # if the locical operator column is non-empty at this row, add it to the string\n",
" if str(cond).lower()!='nan':\n",
" if ')' in cond:\n",
" statement.append(')')\n",
" if 'and' in cond.lower():\n",
" statement.append('and')\n",
" elif 'or' in cond.lower():\n",
" statement.append('or')\n",
" if '(' in cond:\n",
" statement.append('(')\n",
" \n",
" # ------------------------ CASE HANDLING ------------------------ #\n",
" # (see lengthy comment above this loop for details)\n",
" \n",
" # CASE 5 (rparen)\n",
" if str(met).lower()=='nan':\n",
" pass\n",
" \n",
" # CASE 1 (and)\n",
" elif ('and' in low.lower()) or ('/' in low.lower()) or ('&' in low.lower()):\n",
" # IMPORTANT: This assumes that this case only ever has one departament (no 'MTH1000 and PSY1000')\n",
" # However, it can handle an arbitrary number of 'and's\n",
" CRNs = list(set(re.findall('[0-9][0-9][0-9][0-9]', low + ' - ' + high + ' - ' + subj)))\n",
" dept = list(set(re.findall('[A-Z][A-Z][A-Z]', low + ' - ' + high + ' - ' + subj)))[0]\n",
" \n",
" to_add = ['('] # put everything in a paren block since OOO prioritizes OR over AND\n",
" for i in range(0, len(CRNs)):\n",
" to_add.append(dept + CRNs[i])\n",
" \n",
" # here we fill the courselist's entries with new data we gathered from the CAPP report\n",
" ck = next((c for c in CL if c.course_id == dept + CRNs[i]), None)\n",
" if ck is not None:\n",
" # check if the course was taken / condition met\n",
" va = CL.index(ck)\n",
" CL[va].met |= ('yes' in met.lower())\n",
" CL[va].in_major = True\n",
" # copy the requirement metadata over to every node\n",
" CL[va].req_attrs.append(df.attrs)\n",
" \n",
" if i != len(CRNs) - 1:\n",
" to_add.append('or')\n",
" to_add.append(')')\n",
" statement.extend(to_add)\n",
" \n",
" # CASE 2 (XXXX)\n",
" elif ('x' in low.lower()) or ('x' in high.lower()):\n",
" # find course range by finding maximum and minimum possible course vals by substituting 'x' for 9 and 0\n",
" mina = min(int(low.lower().replace('x', '0')), int(high.lower().replace('x', '0')))\n",
" maxa = max(int(low.lower().replace('x', '9')), int(high.lower().replace('x', '9')))\n",
" \n",
" # iterate through courselist and find any courses in the range [mina, maxa], then add them\n",
" good_courses = []\n",
" for x in CL:\n",
" if x.course_code == subj and x.course_num >= mina and x.course_num <= maxa:\n",
" good_courses.append(x.course_id)\n",
" \n",
" # entry filling like in case 1\n",
" va = CL.index(x)\n",
" CL[va].met |= ('yes' in met.lower() and x.course_id == course_id_2)\n",
" CL[va].in_major = True\n",
" CL[va].req_attrs.append(df.attrs)\n",
" \n",
" for i in range(0, len(good_courses)):\n",
" statement.append(good_courses[i])\n",
" if i != len(good_courses) - 1:\n",
" statement.append('or')\n",
" \n",
" # CASE 3 (low/high)\n",
" elif low.isnumeric() and high.isnumeric():\n",
" good_courses = []\n",
" # iterate through courselist and find any courses in the range [low, high], then add them\n",
" for x in CL:\n",
" if x.course_code == subj and x.course_num >= int(low) and x.course_num <= int(high):\n",
" good_courses.append(x.course_id)\n",
" \n",
" # entry filling like in case 1\n",
" va = CL.index(x)\n",
" CL[va].met |= ('yes' in met.lower() and x.course_id == course_id_2)\n",
" CL[va].in_major = True\n",
" CL[va].req_attrs.append(df.attrs)\n",
" \n",
" # put it all in an 'or' block\n",
" for i in range(0, len(good_courses)):\n",
" statement.append(good_courses[i])\n",
" if i != len(good_courses) - 1:\n",
" statement.append('or')\n",
" \n",
" # CASE 4 (no course range / num specified)\n",
" elif str(rule).lower()!='nan' and not low.isnumeric() and not high.isnumeric():\n",
" pass\n",
" \n",
" # DEFAULT CASE (CRN in low)\n",
" else:\n",
" # just copy the id over\n",
" statement.append(subj + low)\n",
" \n",
" # entry filling like in case 1\n",
" ck = next((c for c in CL if c.course_id == subj + low), None)\n",
" if ck is not None:\n",
" va = CL.index(ck)\n",
" CL[va].met |= ('yes' in met.lower())\n",
" CL[va].in_major = True\n",
" CL[va].req_attrs.append(df.attrs)\n",
" \n",
" statement = fix_formatting(statement)\n",
" df.attrs['statement'] = statement"
]
},
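{
"cell_type": "markdown",
"id": "b3e59d07",
"metadata": {},
"source": [
"Sanity check: the reconstructed logic for the first requirement (again, the contents depend on your report):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4f60e18",
"metadata": {},
"outputs": [],
"source": [
"print(req_data[0].attrs['req id'])\n",
"print(req_data[0].attrs['statement'])"
]
},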
{
"cell_type": "code",
"execution_count": 46,
"id": "ac6cbdc3",
"metadata": {},
"outputs": [],
"source": [
"# ALL DATAFRAME ATTRIBUTES:\n",
"\n",
"# ATTRIBUTE KEY - EXAMPLE - DESCRIPTION\n",
"\n",
"# 'type' - 'SCC'/'IPC'/'CNU'/'REQ' - source code key / in progress courses / courses not used / requirements\n",
"# 'req id' - 'PhysicsBach-MTH-or-CSE' - identifier for the requirement\n",
"# 'credits' - 6.0 - credits needed to fulfill this requirement\n",
"# 'met' - True - has this requirement been met?\n",
"# 'is semester' - True - does this requirement correspond with a semester?\n",
"# 'semester id' - 5 - value from 0-7 indicating which semester this requirement is for\n",
"# 'statement' - ['MTH1001', 'and', 'MTH1000'] - the logic used to determine if the requirement has been fulfilled\n",
" \n",
"#ept_list = unique_depts([x for x in CL if x.in_major])\n",
"#dept_color_map = dict(zip(dept_list, distinctipy.get_colors(len(dept_list), pastel_factor=0.9, colorblind_type='Tritanopia')))\n",
"\n",
"SG_reqs = nx.DiGraph()\n",
"for x in req_data:\n",
" q = get_PDAG(x.attrs['statement'], x.attrs['req id'], 'requirement')\n",
" SG_reqs = nx.compose(SG_reqs, q)\n",
"\n",
"for x in CL:\n",
" if x.in_major:\n",
" DG2 = get_PDAG(x.prerequisites, x.course_id, 'prerequisite')\n",
" SG_reqs = nx.compose(SG_reqs, DG2)\n",
" \n",
"for x in CL:\n",
" if x.in_major:\n",
" DG2 = get_PDAG(x.corequisites, x.course_id, 'corequisite')\n",
" SG_reqs = nx.compose(SG_reqs, DG2)\n",
"\n",
"for x in CL:\n",
" if x.in_major:\n",
" DG2 = get_PDAG(x.recommended, x.course_id, 'recommended')\n",
" SG_reqs = nx.compose(SG_reqs, DG2)\n",
"\n",
"for x in CL:\n",
" if x.in_major:\n",
" DG2 = get_PDAG(x.complements_courses, x.course_id, 'complements')\n",
" SG_reqs = nx.compose(SG_reqs, DG2)\n",
" \n",
"disp_PDAG(SG_reqs)"
]
},
{
"cell_type": "markdown",
"id": "ca791633",
"metadata": {},
"source": [
"### Graduation Requirement Network\n",
"\n",
"That was easier than I expected. My physics major CAPP report has been turned into a network:\n",
"\n",
"
\n",
"\n",
"However, this network isn't very helpful. It is currently:\n",
"- **Hard to read.** The nodes are too small/spread out, and they lack useful information about the courses.\n",
"- **Hard to comprehend.** It might be difficult to understand how the OR/AND triangle nodes work.\n",
"- **Unorganized.** The courses should be arranged by semester in chronological order.\n",
"- **Overwhelming.** So much data is being displayed that it is difficult to grasp any meaningful insights from it.\n",
"\n",
"Pyvis, our current visualization tool, is great for viewing the large-scale structure of a network, but it doesn't support any hierarchical layouts. We'll need a better tool to display these networks."
]
},
{
"cell_type": "markdown",
"id": "ea7cc81f",
"metadata": {},
"source": [
"## Visualization\n",
"\n",
"While working on this project, I've experimented with **Pyvis**, **Networkx**, **Graphviz**, and **Holoviz**/**Holoviews**. Out of the four, **Pyvis** and **Graphviz** created the most effective visualizations with the least hassle.\n",
"\n",
"Still, they're not *easy* to use. Pyvis is essentially a bridge from Python to the larger **vis.js**, a comprehensive browser-based visualization library. The documentation for vis.js is great, but for Pyvis... not so much. Similarly, Graphviz is a CLI tool that I can use here because of a Python interfaces that connect me to the tool. The docs for this interface are also unsatisfactory.\n",
"\n",
"Because of the shoddy documentation and my lower comfort level in Python, I'm going to export the data and write the visualizer in another language.\n",
"\n",
"### Exporting\n",
"\n",
"I could export to JSON through NetworkX, but I feel like cleaning things up a little before exporting."
]
},
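{
"cell_type": "markdown",
"id": "d5a71f29",
"metadata": {},
"source": [
"For reference, the zero-effort route would be NetworkX's node-link JSON format -- a minimal sketch, with a hypothetical output path:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e6b82a30",
"metadata": {},
"outputs": [],
"source": [
"from networkx.readwrite import json_graph\n",
"\n",
"with open('visualizer/raw_network.json', 'w') as f:  # hypothetical dump location\n",
"    json.dump(json_graph.node_link_data(SG_reqs), f)"
]
},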
{
"cell_type": "code",
"execution_count": 47,
"id": "dd667b66",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Exported!\n"
]
}
],
"source": [
"node_export = {}\n",
"for n in SG_reqs.nodes(data=True): \n",
" node_properties = {\n",
" 'is logic' : len(n[0].strip()) > 7,\n",
" 'is semester' : False,\n",
" 'is operator' : False,\n",
" 'operator' : 'No operator!',\n",
" 'in major' : False,\n",
" 'credits' : 0,\n",
" 'name' : 'No course name provided!',\n",
" 'code' : 'No code provided!',\n",
" 'num' : -1,\n",
" 'description' : 'No description provided!',\n",
" 'met' : False,\n",
" 'tags' : [],\n",
" 'req ids' : [], # ids of parent requirement groups\n",
" 'semesters' : [], # indices of parent requirement semesters\n",
" 'color' : '#FFFFFF'\n",
" }\n",
" \n",
" if not node_properties['is logic']:\n",
" crs = next((c for c in CL if c.course_id == n[0]), None)\n",
" if crs is not None:\n",
" node_properties['in major'] = crs.in_major\n",
" node_properties['credits'] = crs.credit_hours\n",
" node_properties['name'] = crs.course_name\n",
" node_properties['code'] = crs.course_code\n",
" node_properties['num'] = crs.course_num\n",
" node_properties['description'] = crs.description\n",
" node_properties['met'] = crs.met\n",
" node_properties['tags'] = crs.tags\n",
" \n",
" # nodes could belong to multiple semesters\n",
" for x in crs.req_attrs:\n",
" if 'semester id' in x:\n",
" node_properties['semesters'].append(x['semester id'])\n",
" node_properties['req ids'].append(x['req id'])\n",
" node_properties['semesters'] = list(set(node_properties['semesters']))\n",
" else:\n",
" # So, yes, I'm pulling these values from the triangle visualization code from earlier.\n",
" # Probably not the best idea. TODO: Store operator type data elsewhere.\n",
" if n[1]['shape'] == 'box' and ('freshmen' in n[0].lower() or 'junior' in n[0].lower() or\n",
" 'sophomore' in n[0].lower() or 'senior' in n[0].lower()):\n",
" node_properties['is semester'] = True\n",
" if n[1]['shape'] == 'triangle':\n",
" node_properties['operator'] = 'AND'\n",
" node_properties['is operator'] = True\n",
" if n[1]['shape'] == 'triangleDown':\n",
" node_properties['operator'] = 'OR'\n",
" node_properties['is operator'] = True\n",
" \n",
" if node_properties['code'] in dept_color_map:\n",
" node_properties['color'] = col_to_hex(dept_color_map[node_properties['code']])\n",
" \n",
" node_export[n[0]] = node_properties\n",
"\n",
"edge_export = []\n",
"for e in SG_reqs.edges(data=True):\n",
" node_a = e[0]\n",
" node_b = e[1]\n",
" node_type = e[2]['title']\n",
" # very inefficient! very readable... :3\n",
" edg = {'start' : node_a, 'end' : node_b, 'type' : node_type}\n",
" edge_export.append(edg)\n",
"\n",
"# req data is already well-formatted\n",
"reqs_export = []\n",
"for x in req_data:\n",
" reqs_export.append(x.attrs)\n",
"\n",
"# JSON is very easy in Python\n",
"to_export = {'nodes' : node_export, 'edges' : edge_export, 'requirements' : reqs_export}\n",
"with open('visualizer/mynetwork.json', 'w') as outfile:\n",
" json.dump(to_export, outfile)\n",
"\n",
"print('Exported!')"
]
},
{
"cell_type": "markdown",
"id": "efab8a80",
"metadata": {},
"source": [
"## Done!\n",
"\n",
"The tough work, anyway.\n",
"\n",
"The rest of this adventure happens in Javascript. JS is more prototype-OO-ish stuff, so documenting my process like I've done here doesn't really work. Also, the code is a mess since I wrote it in a rush.\n",
"\n",
"Regardless of its quality, the JS program is a browser-based specialized hierarchical-but-also-force-based graph drawing tool. Most of the code is drawing-focused, but a significant portion of it is dedicated to translating the JSON from an edge list into a linked list. It has some quirks, but I love the way it looks, and it's actually helped me think about what courses I need to take!\n",
"\n",
"You can view the end result [here](visualizer/index.html). Thanks for reading!"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}