Python, Pandas, and Lunch Balances

This is going to be a somewhat high-level overview of how we get lunch status and balance information from our food service software to our SIS.  I'm not going to cover every tiny detail because if you're comfortable dealing with raw data and FTP servers you already know enough, and the process will vary depending on your software stack.  In our case we use NutriKids, which exports a file through what they call a 'Call Agent Export' that is uploaded to our communication vendor for lunch balance notifications.  It's set up as a CSV file with only a few columns/fields:


Since this data was already being created in a file each day, I figured we could import it into our SIS so we would have the latest lunch status and current balance, which would then show in the app for parents.  The output file looks something like this:




In order to import this via an AutoComm into our system, we clean a few things up and only use the data we absolutely need, which in this case means just student number, lunch status, and balance.  First off, our SIS already knows the name and school attached to a student number, so we don't need or want to update those.  Also, if a student is on free lunch status we will import a balance of zero.  The easiest way I found to do this is with Python, Pandas, and NumPy.  I won't walk through the steps to install them on your system because it's pretty simple and I'm not a Python expert by any means.  Once everything is installed we can create a little script that will run each day and format things like we need:

import pandas as pd
import numpy as np
import pysftp

# Path to the file exported from NutriKids
file = r'C:\NutrikidsExport\POSCallAgent.txt'

# Column names for the fields we keep
headerNames = ['student_number', 'status', 'balance1']

# Read only the columns we need (zero-indexed 0, 4, and 5)
df = pd.read_csv(file, header=None, usecols=[0, 4, 5], names=headerNames)

# Shorten the lunch status to the single character our SIS expects
df['status'].replace(to_replace=['Paid'], value='P', inplace=True)
df['status'].replace(to_replace=['Reduced'], value='R', inplace=True)
df['status'].replace(to_replace=['Free'], value='F', inplace=True)

# Zero out the balance for students on free status
df['balance1'] = np.where(df['status'] == 'F', 0.00, df['balance1'])

# Write the cleaned-up file
df.to_csv('nk_ps_out.csv', encoding='utf-8', index=False)

# Upload the file to the SFTP server for the SIS to pick up
cnopts = pysftp.CnOpts()
cnopts.hostkeys = None
srv = pysftp.Connection(host="", username="xxxx", password="xxxx", cnopts=cnopts)
srv.put('nk_ps_out.csv')
srv.close()




We start off by importing the libraries we need to read a CSV and parse it with Pandas and NumPy.  Next, we set the variable named 'file' to the path of the file exported from NutriKids.  After that, we create a list containing the column header names for our output file.  Finally, we read the file into a data frame by passing in the filename and the columns we want to import.  In this case we are only importing the zero-indexed columns 0, 4, and 5: student number, lunch status, and balance, and putting the header names we specified on them.

Once that's done, we have the data loaded and labeled, but our SIS wants the lunch status to be a single character, so let's go through the status column and replace the instances of 'Paid', 'Reduced', and 'Free' with just 'P', 'R', or 'F'.
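As an aside, the three replace calls can also be collapsed into a single call by passing a dictionary; here's a minimal sketch with made-up data standing in for the NutriKids export:

```python
import pandas as pd

# Stand-in for the NutriKids export
df = pd.DataFrame({'status': ['Paid', 'Free', 'Reduced']})

# One mapping instead of three separate replace calls
df['status'] = df['status'].replace({'Paid': 'P', 'Reduced': 'R', 'Free': 'F'})
print(df['status'].tolist())  # ['P', 'F', 'R']
```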

After that, let's search the balance1 column, find anyone with an 'F' status, and zero out their balance1 value.  This is where Pandas and NumPy really save the day: we tell NumPy to search the data frame for any 'F' in the 'status' column and replace the corresponding 'balance1' with 0.00.
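If you prefer to stay entirely in Pandas, the same zeroing can be done with label-based indexing instead of np.where; a minimal sketch with made-up numbers:

```python
import pandas as pd

df = pd.DataFrame({'status': ['P', 'F', 'R'], 'balance1': [5.25, 3.10, 1.00]})

# Select the rows whose status is 'F' and zero their balance
df.loc[df['status'] == 'F', 'balance1'] = 0.00
print(df['balance1'].tolist())  # [5.25, 0.0, 1.0]
```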

Now we are ready to write the formatted data out to a CSV file.  It's worth noting that in NutriKids we could configure this export to only pull the needed fields, but you might be working with a program that doesn't give you that option, so this is a pretty gentle exercise in reformatting data.  Taking a peek at the new file, it should look something like this now:




At this point it depends on where you're running this script: if you're already on your SFTP server and the file is written to the spot it will be picked up from, you can stop.  If, however, this is running on a separate server and you need to upload the file to an SFTP server for your SIS to pick up, you can use the little pysftp snippet at the end of the script.

The virtual machine this script runs on is used exclusively for these types of processes, so I just dropped the script on the desktop and used Windows Task Scheduler to run it after the export from the lunch line each day:



Now, back in our SIS, we set up an AutoComm which grabs the file from the SFTP server and updates the corresponding student records:


While this post wasn't a total step-by-step, I hope it serves as a gentle introduction to using Python and Pandas for moving and formatting data between systems.



Plugin Export with PS Data Export Manager (DEM)

If you have been on PowerSchool long you probably have several things that communicate via AutoSend or something else that sends data via FTP as a CSV file.  Now you can write your SQL in a plugin and schedule it with the Data Export Manager.  It certainly feels more modern, gives you more flexibility because you can write your own SQL, and if you're a vendor it's much easier for clients to install and update.

First, write a simple plugin.  In this example it exports a CSV file to our student ID badge printing software.  It's an easy plugin to write, but something that kept tripping me up was the permissions_root folder and the <permission_mappings> file; here's the one for this example:


Now, the bit that kept causing me grief was the permission name tag.  Anyone who has a security role which allows them to access the page referenced in <permission name> can access this query in the DEM, so in this case anyone with import/export page access.  The only other tag is the name of the query, which is defined in the named_queries file:

<query name="com.neosho.product.students.badge_export" coreTable="Students" flattened="false">
  <summary>Badge System Export</summary>
  <description>Badge System Export</description>
  <columns>
    <column column="STUDENTS.last_name">Last_Name</column>
    <column column="STUDENTS.first_name">First_Name</column>
    <column column="STUDENTS.grade_level">Grade_Level</column>
    <column column="STUDENTS.home_room">Home_Room</column>
    <column column="STUDENTS.student_number">Student_Number</column>
  </columns>
  <sql>
    <![CDATA[
      SELECT Last_Name, First_Name, Grade_Level, Home_Room, Student_Number
      FROM Students
      WHERE Grade_Level >= 5
      AND Enroll_Status = 0
    ]]>
  </sql>
</query>

Once you have the plugin installed, navigate to the DEM screen and select it:



Then pick your file name and delimiter settings and click the ‘Save Template’ button at the bottom right:



Now just name your template and enter a description then click ‘Save as New’:


You can now schedule the export.  If you're self-hosted you can save it locally, but if you're hosted by PowerSchool you will need to have set up your FTP server in the Remote Connection Manager.


You can click the play button to force it to run, the pencil icon to edit the settings, or the dash to remove it:



The plugin code is available on GitHub.




PRIDE Report

Once grades are stored for a progress report or grade card, our high school uses data such as number of referrals, attendance percentage, and grade counts to determine students' CAT Time/reward level for the next few weeks.  The report is written in sqlReports 4 and the result looks like this:


In order to run the report you must enter the date range for attendance and discipline referral counting and the store code for grades:


And here is the SQL code.  Note things like log.logtypeid and adjust as needed.  Also, you can see where 'Neosho High School' is hard-coded as the school name; this can be adjusted or removed depending on how you want to handle students who transfer in.  As it is, the report will only count stored grades from Neosho High School.  You could add this as a parameter on the report, but to keep things simple for our end users I just put it right in the SQL.

The report is also available on GitHub.

WITH tot AS (
select pmd.studentid, pmd.schoolid, pmd.grade_level,
  round(sum(case
    when pad.total_minutes is null then pmd.potential_minutes_present
    else pad.total_minutes
    end)/60,2) act_min,
  round(sum(pmd.potential_minutes_present)/60,2) pot_min,
  round(sum(pmd.potential_minutes_present)/60 - sum(case
    when pad.total_minutes is null then pmd.potential_minutes_present
    else pad.total_minutes
    end)/60,2) min_diff,
  count(*) memb
from ps_membership_defaults pmd
left outer join PS_ATTENDANCE_DAILY pad
on pmd.studentid=pad.studentid
and pmd.calendardate=pad.att_date
where pmd.calendardate >= '%param1%' and pmd.calendardate <= '%param2%'
group by pmd.studentid, pmd.schoolid, pmd.grade_level),

sch_memb as (
  select tot.schoolid,
    max(tot.memb) max_memb
  from tot
  group by tot.schoolid),

stu_odr as (
  select log.studentid st_id,
    count(*) as tot
  from log
  join students
  on = log.studentid
  where log.logtypeid = -100000
  and log.entry_date between '%param1%' and '%param2%'
  and students.schoolid = ~(curschoolid)
  group by log.studentid),

stu_grades as (
  select studentid,
   SUM(case when grade like 'A%' then 1 else 0 end) Agrade,
   SUM(case when grade like 'B%' then 1 else 0 end) Bgrade,
   SUM(case when grade like 'C%' then 1 else 0 end) Cgrade,
   SUM(case when grade like 'D%' then 1 else 0 end) Dgrade,
   SUM(case when grade like 'F%' then 1 else 0 end) Fgrade
  from storedgrades
  where storecode = '%param3%'
  and datestored > '%param1%'
  and schoolname = 'Neosho High School'
  group by studentid),

TotalPossible as (
  Select studentid,
    sum(attendancevalue) as present,
    sum(membershipvalue)-sum(attendancevalue) as absent,
    sum(membershipvalue) as TotalPossible
  From PS_AdaAdm_Defaults_All
  Where attendancevalue is not NULL and calendardate>='%param1%' and calendardate<='%param2%'
  Group by studentid)

SELECT students.student_number, students.lastfirst, students.grade_level, '%param3%', stu_grades.Agrade, stu_grades.Bgrade, stu_grades.Cgrade, stu_grades.Dgrade, stu_grades.Fgrade, round((tot.act_min / tot.pot_min) * 100,2), nvl(to_number(stu_odr.tot), 0),
(case when to_number(stu_grades.Fgrade, '99') = 0 and to_number(stu_grades.Dgrade, '99') = 0 and to_number(stu_grades.Cgrade, '99') = 0 and round((tot.act_min / tot.pot_min) * 100,2) >= %param4% and NVL(to_number(stu_odr.tot), 0) = 0 then 'Gold'
      when to_number(stu_odr.tot, '99') >= 1 then 'Bronze'
      when to_number(stu_grades.Fgrade, '99') = 0 and to_number(stu_grades.Dgrade, '99') = 0 and round((tot.act_min / tot.pot_min) * 100,2) >= %param5% and NVL(to_number(stu_odr.tot), 0) = 0 then 'Silver'
      else 'Bronze' end), TotalPossible.absent
FROM students
JOIN tot
ON = tot.studentid
LEFT OUTER JOIN stu_grades
ON stu_grades.studentid =
LEFT OUTER JOIN stu_odr
ON stu_odr.st_id =
LEFT JOIN TotalPossible
ON TotalPossible.studentid =
WHERE students.schoolid = ~(curschoolid)
AND tot.pot_min > 0
AND students.enroll_status = 0




Of Closures and Delegates


If you've been around Cocoa and Objective-C for any amount of time you are quite familiar with the delegate pattern.  You have also noticed that Swift allows you to implement this same pattern.  And maybe when you were implementing a delegate pattern for a single-method protocol you thought there must be a shorter way…
Don’t get me wrong, things like UITableViewController, AVFoundation, progress updates and the like that have several calls are well served by delegate methods.  But what about something that only does one thing like maybe a modal view controller that prompts for a username?  In these brief instances I like the clarity of providing a trailing closure in the presenting view controller:

func showModalView() {
    let modalView = SomeModalView()

    // Handle the entered name right here instead of through a delegate
    modalView.onDataEntered = { [unowned self] userName in
        self.greetUser(userName: userName)
    }

    present(modalView, animated: true, completion: nil)
}





Of course closures come with some memory management/ownership considerations, as the [unowned self] reminds you, and depending on the implementation you could go with [weak self] if there was a possibility of the caller no longer being around.  But for small, one-off situations I like having this option, and the succinct implementation in the modal controller is, at least in my opinion, easy to follow:

class SomeModalView: UIViewController {

    var userName: String?

    // Set by the presenter; called with the entered user name
    var onDataEntered: ((_ userName: String) -> ())?

    func submitTapped() {
        guard let uname = userName else { return }
        onDataEntered?(uname)
    }
}




Delegate methods are great when you have several different callbacks and make it clear in your code what is happening (tableViewCell selected, etc.).  As with most things in Swift there are lots of ways to create the same functionality and in the end a lot of this comes down to preference, but for just a single action I like the simplicity of being able to say “just do this one thing when this happens” without having to create a protocol for it (don’t tell Crusty).

Standards Assignment Scores

This year our school district went with standards-based grading for grades K-6, and one of the requests was that parents/students be able to view individual assignments and their standards scores for a class.  After a lot of looking around I was able to put together a query which seems to work.  It JOINs several tables, but it is still quite performant in practice, and with the eventual transition to PowerTeacher Pro and its new tables, optimization is not high on the list.

We are still using traditional grades for our junior high and high school, so I added a conditional to the scores.html page: if the schoolid is 150 or less (our high school is 100, the junior high is 150, and all the elementary schools and the middle school are higher) it shows as usual:

….<!-- normal scores page -->
Otherwise it shows the results of the query in a table, which uses the current studentid, sectionid, and schoolid:
<table border="0" cellpadding="0" cellspacing="0" align="center" width="99%" class="tableToGrid">
<tr>
  <th data-sortorder="asc" width="10%">Date</th>
  <th width="25%">Assignment</th>
  <th width="40%">Standard Description</th>
  <th width="20%">Standard ID</th>
  <th width="10%" data-sorttype="attrInt">Score</th>
</tr>
~[tlist_sql;
SELECT To_Char(sa.dateassignmentdue, 'MM/DD/YYYY'),
       sa.Name,
       st.Description,
       st.Identifier,
       ss.Score
FROM PSM_AssignmentStandardScore ss
JOIN PSM_AssignmentStandard ast
  ON ss.assignmentstandardid =
JOIN PSM_Standard st
  ON ast.standardid =
JOIN PSM_SectionEnrollment se
  ON ss.sectionenrollmentid =
JOIN PSM_Section sct
  ON se.sectionid =
JOIN PSM_School sch
  ON sct.schoolid =
JOIN PSM_SchoolCourse schcrs
  ON sct.SchoolCourseID =
JOIN PSM_SectionAssignment sa
  ON ast.sectionassignmentid =
JOIN PSM_Assignment asmt
  ON sa.assignmentid =
JOIN PSM_Student
  ON = se.StudentID
JOIN Students
  ON Students.Student_Number = PSM_Student.StudentIdentifier
JOIN PSM_Term term
  ON sct.TermID = term.ID
WHERE sch.schoolnumber = ~(curschoolid)
  AND = ~(curstudid)
  AND se.DateEnrolled < To_Date(To_Char(SYSDATE, 'MM/DD/YYYY'), 'MM/DD/YYYY')
  AND se.DateLeft > To_Date(To_Char(SYSDATE, 'MM/DD/YYYY'), 'MM/DD/YYYY')
  AND = (SELECT SectionID FROM SYNC_SectionMap WHERE SectionsDCID = ~(sectionid))
ORDER BY sa.dateassignmentdue, schcrs.SchoolCourseTitle, st.identifier]
</table>
This produces the output below, which provides parents and students the info we were looking for in a sortable table:
One thing that bothered me was that it's not very intuitive to click the double dash '--' to view the scores, so I customized /wildcards/att_quicklookmeet_guardian.txt with a conditional to show a link to the scores.html page for SBG assignment detail:

~[if#schoolcheck.~(curschoolid)>150]<br><a href="/guardian/scores.html?frn=004[ccid]">View SBG Assignments</a>[/if#schoolcheck]
Maybe there’s a completely better way of doing this but with the conditionals I didn’t have to actually remove or modify the original page other than adding to it.  If you know of a better way please let me know.  And if you know of a way to get something like this working with PTP tables please share that too!

Linear Regression with Swift

As a programmer it's hard to keep up with all the areas of technology, so you end up picking a few to dig into while trying to maintain at least a cursory knowledge of as many others as possible.  Since the buzz over machine learning and artificial intelligence has only been increasing, I decided to take a jump in the shallow end.  There seem to be two ways into it: the first is concepts and calculus; the other is grabbing an existing framework and embracing the 'black box' approach.  Without knowing too much I started with a feed-forward neural network Swift library.  It was fun to play with, but I quickly ran into not being able to do much, as my limited knowledge of the internals limited my implementation.  Then Apple announced that the Accelerate and Metal frameworks would support features for convolutional neural networks in iOS 10, and I decided it was time to dig in.

After looking around I found Andrew Ng’s Coursera class comes highly recommended.  It’s about a ten week course and the consensus was that you didn’t need to be a practicing mathematician to take it.  I enrolled and after having finished the class I decided to try and write down ‘my understanding’ on some of the concepts.  I find that writing about something helps me solidify understanding and if it helps someone else (either understand or feel better about your own understanding) then that’s always fantastic.  And if you see a way to optimize something please let me know – no programmer wants to do things the hard way.

Many machine learning algorithms solve 'classification' problems: something like feeding in a picture and having the algorithm identify whether it contains a person, car, etc.  We aren't going to start there, though.  We are going to start with linear regression, an algorithm that takes known data and makes a prediction based on a linear trend, such as the commonly used example of home values over time, which usually trend upward.  While this is certainly the shallow end of the pool, concepts such as the cost function recur throughout machine learning.

Let’s say we have some data that looks like this:

1 2.67
2 2.76
3 2.69
4 2.82
5 2.94
6 2.99

And if we plot it we get something like this:


What we want to do is predict what the value of Y would be when X is 8.  Since this is called linear regression our prediction will be linear and look like the blue line:


To draw that line we will use the intercept and slope, and we'll call that theta.  Theta will therefore have two components (intercept and slope) which we will call theta zero and theta one.  What we want to do is try different values of theta zero and theta one to draw lots of 'hypothetical' lines and calculate the 'cost' of each line.  For example, if we drew a line with a theta of 1,1 our prediction looks like this (blue line is theta, green is our training data, aka what we know):


The ‘cost’ is the difference between our prediction and the actual data, which could be visualized at each data point something like this:


We will add up all those costs to calculate how well that line fits our data.  The cost of a single point is the squared difference between the prediction and the actual value: ((θ0 + θ1 * X) − Y)².  The formula that adds these up, averaged over the number of points, is conveniently referred to as the cost function.
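To make that concrete, here is the cost function sketched in Python against the sample data above (the function name is mine, not from the course):

```python
# Mean squared-error cost of the line y = theta0 + theta1 * x
def cost(theta0, theta1, xs, ys):
    m = len(xs)
    return sum(((theta0 + theta1 * x) - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

xs = [1, 2, 3, 4, 5, 6]
ys = [2.67, 2.76, 2.69, 2.82, 2.94, 2.99]

# A gently rising line through the data is far 'cheaper' than theta = 1,1
print(cost(2.6, 0.06, xs, ys))
print(cost(1.0, 1.0, xs, ys))
```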

You’ll often hear that we want to ‘minimize’ the cost function.  What that means is we want to find the cheapest line, aka the one that best fits the data.  In programmer terms we could calculate a bunch of possibilities and save their costs to an array, then find the lowest cost value in the array and use that as the best fit.  But how do we know which direction to adjust our line to best fit the data?  That’s where gradient descent comes in, think of it as a function that you give your data to and then it finds the ‘way downhill’.

We will provide our gradient descent function with an alpha, called the 'learning rate', which moves us incrementally toward the lowest cost, and we will also tell it how many iterations to perform.  Obviously, the fewer iterations the faster it runs, but the less accurate the result.  In this limited example you could tell it to run 500 times or 5000 without any real noticeable performance impact, but you could imagine that might not be the case with tens of thousands of training examples.

If you want to see it work, your gradient descent function can keep an array (or vector) with one slot per iteration to store the cost history for each time through the loop.  For every iteration of the function we add the new cost to the array.

Since the gradient descent formula automatically moves ‘downhill’ our ending value of theta will be the cheapest line we could calculate.  It’s worth noting however that this naive implementation is susceptible to something called local minima, which means gradient descent could get caught in a ‘valley’ that’s not actually the cheapest line but it sees no way but up and gets stuck.
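Before moving to a real data set, the whole loop can be sketched in a few lines of Python (the Swift playground version follows the same shape; the function and variable names here are mine):

```python
# Batch gradient descent for single-variable linear regression.
# alpha is the learning rate; returns an (intercept, slope) pair.
def find_best_fit(xs, ys, iterations, alpha):
    m = len(xs)
    theta0, theta1 = 0.0, 0.0  # start from the line y = 0
    for _ in range(iterations):
        # Average error, and error weighted by x (the two gradient components)
        grad0 = sum((theta0 + theta1 * x) - y for x, y in zip(xs, ys)) / m
        grad1 = sum(((theta0 + theta1 * x) - y) * x for x, y in zip(xs, ys)) / m
        # Step both parameters 'downhill' simultaneously
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

xs = [1, 2, 3, 4, 5, 6]
ys = [2.67, 2.76, 2.69, 2.82, 2.94, 2.99]
intercept, slope = find_best_fit(xs, ys, 5000, 0.01)
print(intercept + slope * 8)  # prediction at X = 8, roughly 3.1
```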

For something of a more concrete example, I grabbed some user data from Nissan Leaf owners about battery capacity loss.  We will use the x axis to represent the number of miles driven and the y axis to represent the number of capacity bars lost.  In fairness, this isn't meant to be an accurate prediction of battery capacity; there's a lot more to consider such as climate/temperature, age of battery, etc., but for simplicity we aren't going to get into multivariate calculations here.

We will put the mileage into an array called 'milesDriven' and the number of capacity bars lost into another array called 'barsLost'.  Now, if you look at the range of values in milesDriven, it covers quite a distance, from 0 to almost 130,000.  This brings us to feature scaling/normalization, which basically means making all the values fall between zero and one: the lowest value becomes 0.0 and the highest value becomes 1.0:

x = (x - min(x)) / (max(x) - min(x))

The map function is perfect for succinctly applying this to the entire array.  Next we will normalize the barsLost data for consistency, and set the iterations and learningRate (alpha) values.  These are values you should experiment with to see how they change the overall output.  Likewise for predictionMiles: set it to your target miles; for example, put in 23000 to have the algorithm calculate capacity loss at 23,000 miles.  Naturally, we normalize this value to match our arrays.
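The scaling itself is only a couple of lines; here is a sketch in Python with made-up mileage values for illustration:

```python
# Min-max normalization: lowest value -> 0.0, highest -> 1.0
def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

miles_driven = [0, 12000, 43000, 88000, 129000]
scaled = normalize(miles_driven)
print(scaled[0], scaled[-1])  # 0.0 1.0

# A prediction target gets the same treatment before being fed in
prediction_miles = (23000 - min(miles_driven)) / (max(miles_driven) - min(miles_driven))
```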

The JHistory array is created and initialized so we can visualize the cost history as the algorithm runs.  We init it with nil values, otherwise it might look like an actual calculated value in the history and throw us off.  The two main functions are pretty much self-explanatory: predictedBarsLost takes an intercept and slope and calculates the predicted capacity loss, while findBestFit takes our (normalized) data arrays as input along with the number of iterations and the alpha/learning rate.  Starting with a line of 0,0 (intercept and slope), it runs for the number of iterations, and for each iteration it adjusts the intercept and slope, calculates the cost, and adds that to the history array.  Ultimately it returns an intercept and slope tuple fit to the data.

We can then take the output intercept and slope to predict capacity loss at a given mileage and convert those back to regular values.  Something to watch is the JHistory array as it runs; it should decrease:


Playgrounds are perfect for this type of thing – try playing with the iterations and learningRate values to see how they impact performance.

Some great posts about gradient descent and linear regression:



PowerSchool API with Swift

Update: I have updated the wrapper for Swift 3 and will make a new post with it.

During the day I work in PowerSchool and during the evening I've been working in Swift; the two almost never crossed paths until I was reading the PowerSchool API and PowerQuery docs.  PowerSchool's API lets you connect via a plugin, which is basically some XML files that include SQL queries.  The user installs the plugin on the server and enables it, which generates a ClientID and Secret your client-side software can use to request authorization to connect and execute those queries (OAuth).

The first step is to create your plugin, which will contain the queries to retrieve the data you wish to access.  In this case a plugin is just a couple of XML files zipped into one archive: one file named plugin.xml in the root of the folder and another file in the 'queries_root' folder which contains the actual queries.  Here's a simple plugin.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns=""
        xmlns:xsi=""
        xsi:schemaLocation=" plugin.xsd"
        name="YourCompany Named Query"
        description="YourCompany Named Query">

  <access_request>
    <field table="STUDENTS" field="FIRST_NAME" access="ViewOnly" />
    <field table="STUDENTS" field="LAST_NAME" access="ViewOnly" />
    <field table="STUDENTS" field="GRADE_LEVEL" access="ViewOnly" />
    <field table="STUDENTS" field="ENROLL_STATUS" access="ViewOnly" />
    <field table="STUDENTS" field="ID" access="ViewOnly" />
  </access_request>

  <publisher name="Publisher">
    <contact email="" />
  </publisher>

</plugin>

It's pretty self-explanatory, but the most interesting part is between the <access_request> tags, where you specify which field(s) your plugin would like to access as well as the type of access (ViewOnly in this case).  When the plugin is installed on the server, the user is shown a list of the fields your plugin will access and can then approve/enable access.  Keep in mind you must list any field you want to use in your query here, even if you're just searching on a value and not actually displaying it, such as the ENROLL_STATUS field in this example.


Now, let’s look at the named_queries file:


<query name="com.companyName.product.students.student_search" coreTable="" flattened="true">

  <description>Search by API</description>

  <args>
    <arg name="lastname" type="primitive" required="true" />
    <arg name="gradelevel" type="primitive" required="true" />
    <arg name="enrollstat" type="primitive" required="true" />
  </args>

  <columns>
    <column column="STUDENTS.FIRST_NAME">first_name</column>
    <column column="STUDENTS.LAST_NAME">last_name</column>
    <column column="STUDENTS.ID">id</column>
  </columns>

  <sql>
    <![CDATA[
      select first_name, last_name, id
      from students
      where last_name = :lastname
      and grade_level = :gradelevel
      and enroll_status = :enrollstat
      order by last_name
    ]]>
  </sql>

</query>

As you can quickly see, this is where the actual query takes place, starting with the <query name> and the <arg> tags, where you specify the name and fields required for your query.  The <column> tags line up with the output of your query.  Note that the query itself goes in the <sql> tag, wrapped in CDATA to preserve it.  Of course, you can put multiple queries in the file by using additional <query> tags.  Variables/arguments are accessed in the query by a colon followed by the name specified in the <arg> tags:


For example the argument:

<arg name="lastname" type="primitive" required="true" />

Is accessed by the query (<sql> tags):

where last_name = :lastname

The <query name> becomes the address at which you access the query.

Now, zip the files up, and on the server go to System -> System Settings -> Plugin Management Configuration and click the 'Install' button.  Once the plugin is installed you can enable it with the checkbox.  When you enable it, you will be prompted to approve access to the fields.

Once it’s enabled, click on the name of the plugin and the ‘Data Configuration’ link at the bottom of the screen to get your Client ID and Secret.

Meanwhile….in Xcode….

Now for the fun part: actually accessing the data from Swift.  This is where Xcode Playgrounds really shine, letting you quickly prototype your network/API code.  Here we will make a wrapper class which helps with the OAuth/token dance.  This wrapper/playground is meant to get you up and running and exploring the API; it's not ready to just drop in for production use.

Start by creating a Base64-encoded string of our ClientID and Secret.  These values are set in the class here, but in production you would most certainly pull them from a server somewhere in case they change.  Now we can create a mutable URL request so we can modify the HTTP headers to specify the Authorization and Content-Type, as well as set our HTTP body to grant_type=client_credentials.
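For what it's worth, that Authorization header is just the Base64 of 'clientID:secret'.  Sketched here in Python rather than Swift, with placeholder credentials, so you can see the exact shape of the headers and body:

```python
import base64

# Placeholder credentials; use the values from the plugin's Data Configuration screen
client_id = "xxxx"
client_secret = "yyyy"

# Base64-encode "id:secret" for the Authorization header
credentials = f"{client_id}:{client_secret}".encode("utf-8")
auth_header = "Basic " + base64.b64encode(credentials).decode("ascii")

headers = {
    "Authorization": auth_header,
    "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8",
}
body = "grant_type=client_credentials"
print(auth_header)
```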

Once we have all those pieces, we are ready to create our NSURLSession with the request.  We use the completion closure of the dataTaskWithRequest method to try to unwrap our JSON response containing the access_token/bearer token.  Remember, NSURLSession tasks are created in a suspended state, so call task.resume() to actually kick things off.

That was kind of a pain just to get a token so the server will even talk to us, and that code is not going to change, so that method takes a completion closure with the type (String) -> ().  To make composing requests and parsing responses easier, we create the getResponse function, which takes the URL of your query and the body/parameters (in JSON format, of course) as well as... you guessed it, a closure with the type (NSData, NSURLResponse, NSError?) -> (), which not so coincidentally matches up with the return closure from NSURLSession.dataTaskWithRequest:

This function first gets the access/bearer token from our first method and then uses that token to create a request to the URL you specified with the parameters (in JSON).  The resulting data is passed to the closure you pass into getResponse.  It sounds messy, but once you realize it's really just two HTTP POSTs chained together, you see it's pretty simple.  And once those methods are completed, you can make a request by simply calling the getResponse method with your closure.  If you make the wrapper a singleton, you can simply call the shared instance and pass in your data and closure.

Singleton tip:

static let sharedInstance = PowerSchoolServiceWrapper()
private init() { }

Now we will access the 'Search by API' query we wrote, so we must specify a last name, grade level, and enrollment status (0 is active) and pass a closure which parses the responding JSON and appends it to our array of Student structs:

var studentSelection: [Student] = []

var lastNameSearch = "StudentLastName"
var gradeLevelSearch = 12

var searchString = "{ \"gradelevel\": \(gradeLevelSearch), \"enrollstat\": 0, \"lastname\": \"\(lastNameSearch)\" }"

PowerSchoolServiceWrapper.sharedInstance.getResponse(
    payload: searchString,
    parser: { (data, response, error) -> () in

        do {
            let results = try NSJSONSerialization.JSONObjectWithData(data, options: .AllowFragments) as? [String:AnyObject]

            if let records = results!["record"] as? [[String:AnyObject]] {
                for rec in records {
                    if let fname = rec["first_name"] as? String, let lname = rec["last_name"] as? String, let id = rec["id"] as? String {
                        var newStudent = Student()
                        newStudent.lastName = lname
                        newStudent.firstName = fname
                        newStudent.dbID = id
                        studentSelection.append(newStudent)
                    }
                }
            }
        } catch {
            print("parsing error")
        }
    })


Here’s the full gist of the wrapper so you can copy/paste into an Xcode Playground to get started with all the Swifty API goodness:
