cupsInteger[16] - CUPS 1.2/Mac OS X 10.5 - User-defined integer values
cupsMarkerType[64] - CUPS 1.2/Mac OS X 10.5 - Ink/toner type
cupsMediaType - Media type code
cupsNumColors - CUPS 1.2/Mac OS X 10.5 - Number of color components
cupsPageSizeName[64] - CUPS 1.2/Mac OS X 10.5 - PageSize name
cupsPageSize[2] - CUPS 1.2/Mac OS X 10.5
The CUPS raster API provides a standard interface for reading and writing CUPS raster streams, which are used for printing to raster printers. Because the raster format is updated from time to time, it is important to use this API to avoid incompatibilities with newer versions of CUPS.
Two kinds of CUPS filters use the CUPS raster API - raster image processor (RIP) filters such as pstoraster and cgpdftoraster (Mac OS X) that produce CUPS raster files, and printer driver filters that convert CUPS raster files into a format usable by the printer. Printer driver filters are by far the most common.
CUPS raster files (application/vnd.cups-raster) consist of a stream of raster page descriptions produced by one of the RIP filters such as pstoraster, imagetoraster, or cgpdftoraster. CUPS raster files are referred to using the cups_raster_t type and are opened using the cupsRasterOpen function. For example, to read raster data from the standard input, open file descriptor 0:
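A minimal sketch of opening the stream (the variable name ras is just our choice):

```c
#include <cups/raster.h>

cups_raster_t *ras = cupsRasterOpen(0, CUPS_RASTER_READ);
```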
Each page of data begins with a page dictionary structure called cups_page_header2_t. This structure contains the colorspace, bits per color, media size, media type, hardware resolution, and so forth used for the page.
Do not confuse the colorspace in the page header with the PPD ColorModel keyword. ColorModel refers to the general type of color used for a device (Gray, RGB, CMYK, DeviceN) and is often used to select a particular colorspace for the page header along with the associated color profile. The page header colorspace (cupsColorSpace) describes both the type and organization of the color data, for example KCMY (black first) instead of CMYK and RGBA (RGB + alpha) instead of RGB.
You read the page header using the cupsRasterReadHeader2 function:
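For example, looping over all pages in the stream opened earlier (ras is the cups_raster_t pointer returned by cupsRasterOpen):

```c
cups_page_header2_t header;

while (cupsRasterReadHeader2(ras, &header))
{
  /* set up this page from the header values... */
}
```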
After the page dictionary comes the page data, which is a full-resolution, possibly compressed bitmap representing the page in the printer's output colorspace. You read uncompressed raster data using the cupsRasterReadPixels function. A for loop is normally used to read the page one line at a time:
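A sketch of such a loop, sizing the line buffer from the page header as described below under cupsRasterReadPixels:

```c
unsigned char *buffer = malloc(header.cupsBytesPerLine); /* needs <stdlib.h> */
unsigned y;

for (y = 0; y < header.cupsHeight; y ++)
{
  if (cupsRasterReadPixels(ras, buffer, header.cupsBytesPerLine) == 0)
    break;

  /* process or write the line here... */
}

free(buffer);
```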
When you are done reading the raster data, call the cupsRasterClose function to free the memory used to read the raster file:
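For the stream opened earlier:

```c
cupsRasterClose(ras);
```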
Close a raster stream.
void cupsRasterClose (
cups_raster_t *r
);
The file descriptor associated with the raster stream must be closed separately as needed.
Interpret PPD commands to create a page header.
int cupsRasterInterpretPPD (
cups_page_header2_t *h,
ppd_file_t *ppd,
int num_options,
cups_option_t *options,
cups_interpret_cb_t func
);
Interpretation callback (NULL for none)
0 on success, -1 on failure
This function is used by raster image processing (RIP) filters like cgpdftoraster and imagetoraster when writing CUPS raster data for a page. It is not used by raster printer driver filters, which only read CUPS raster data.
cupsRasterInterpretPPD does not mark the options in the PPD using the 'num_options' and 'options' arguments. Instead, mark the options with cupsMarkOptions and ppdMarkOption prior to calling it - this allows for per-page options without manipulating the options array.
The 'func' argument specifies an optional callback function that is called prior to the computation of the final raster data. The function can make changes to the cups_page_header2_t data as needed to use a supported raster format and then returns 0 on success and -1 if the requested attributes cannot be supported.
cupsRasterInterpretPPD supports a subset of the PostScript language. Currently only the [, ], <<, >>, {, }, cleartomark, copy, dup, index, pop, roll, setpagedevice, and stopped operators are supported.
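A sketch of typical RIP-filter usage, assuming the ppd, num_options, and options values have already been loaded elsewhere in the filter:

```c
cups_page_header2_t header;

/* Mark the options first - cupsRasterInterpretPPD does not do this itself */
cupsMarkOptions(ppd, num_options, options);

if (cupsRasterInterpretPPD(&header, ppd, num_options, options, NULL) < 0)
  fprintf(stderr, "ERROR: Unable to interpret PPD options!\n");
```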
Open a raster stream.
cups_raster_t *cupsRasterOpen (
int fd,
cups_mode_t mode
);
CUPS_RASTER_READ, CUPS_RASTER_WRITE, or CUPS_RASTER_WRITE_COMPRESSED
New stream
This function associates a raster stream with the given file descriptor. For most printer driver filters, 'fd' will be 0 (stdin). For most raster image processor (RIP) filters that generate raster data, 'fd' will be 1 (stdout).
When writing raster data, the CUPS_RASTER_WRITE or CUPS_RASTER_WRITE_COMPRESSED mode can be used - compressed output is generally 25-50% smaller but adds a 100-300% execution time overhead.
Read a raster page header and store it in a version 1 page header structure.
unsigned cupsRasterReadHeader (
cups_raster_t *r,
cups_page_header_t *h
);
1 on success, 0 on failure/end-of-file
This function is deprecated. Use cupsRasterReadHeader2 instead.
Version 1 page headers were used in CUPS 1.0 and 1.1 and contain a subset of the version 2 page header data. This function handles reading version 2 page headers and copying only the version 1 data into the provided buffer.
Read a raster page header and store it in a version 2 page header structure.
unsigned cupsRasterReadHeader2 (
cups_raster_t *r,
cups_page_header2_t *h
);
1 on success, 0 on failure/end-of-file
Read raster pixels.
unsigned cupsRasterReadPixels (
cups_raster_t *r,
unsigned char *p,
unsigned len
);
Number of bytes read
For best performance, filters should read one or more whole lines. The 'cupsBytesPerLine' value from the page header can be used to allocate the line buffer and as the number of bytes to read.
Write a raster page header from a version 1 page header structure.
unsigned cupsRasterWriteHeader (
cups_raster_t *r,
cups_page_header_t *h
);
1 on success, 0 on failure
This function is deprecated. Use cupsRasterWriteHeader2 instead.
Write a raster page header from a version 2 page header structure.
unsigned cupsRasterWriteHeader2 (
cups_raster_t *r,
cups_page_header2_t *h
);
1 on success, 0 on failure
The page header can be initialized using cupsRasterInterpretPPD.
Write raster pixels.
unsigned cupsRasterWritePixels (
cups_raster_t *r,
unsigned char *p,
unsigned len
);
Number of bytes written
For best performance, filters should write one or more whole lines. The 'cupsBytesPerLine' value from the page header can be used to allocate the line buffer and as the number of bytes to write.
AdvanceMedia attribute values
typedef enum cups_adv_e cups_adv_t;
Boolean type
typedef enum cups_bool_e cups_bool_t;
cupsColorSpace attribute values
typedef enum cups_cspace_e cups_cspace_t;
CutMedia attribute values
typedef enum cups_cut_e cups_cut_t;
LeadingEdge attribute values
typedef enum cups_edge_e cups_edge_t;
cupsRasterInterpretPPD callback function
typedef int (*cups_interpret_cb_t)(cups_page_header2_t *header, int preferred_bits);
Jog attribute values
typedef enum cups_jog_e cups_jog_t;
cupsRasterOpen modes
typedef enum cups_mode_e cups_mode_t;
cupsColorOrder attribute values
typedef enum cups_order_e cups_order_t;
Orientation attribute values
typedef enum cups_orient_e cups_orient_t;
Version 2 page header
typedef struct cups_page_header2_s cups_page_header2_t;
Version 1 page header
typedef struct cups_page_header_s cups_page_header_t;
Raster stream data
typedef struct _cups_raster_s cups_raster_t;
Version 2 page header
struct cups_page_header2_s {
unsigned AdvanceDistance;
cups_adv_t AdvanceMedia;
cups_bool_t Collate;
cups_cut_t CutMedia;
cups_bool_t Duplex;
unsigned HWResolution[2];
unsigned ImagingBoundingBox[4];
cups_bool_t InsertSheet;
cups_jog_t Jog;
cups_edge_t LeadingEdge;
cups_bool_t ManualFeed;
unsigned Margins[2];
char MediaClass[64];
char MediaColor[64];
unsigned MediaPosition;
char MediaType[64];
unsigned MediaWeight;
cups_bool_t MirrorPrint;
cups_bool_t NegativePrint;
unsigned NumCopies;
cups_orient_t Orientation;
cups_bool_t OutputFaceUp;
char OutputType[64];
unsigned PageSize[2];
cups_bool_t Separations;
cups_bool_t TraySwitch;
cups_bool_t Tumble;
unsigned cupsBitsPerColor;
unsigned cupsBitsPerPixel;
float cupsBorderlessScalingFactor;
unsigned cupsBytesPerLine;
cups_order_t cupsColorOrder;
cups_cspace_t cupsColorSpace;
unsigned cupsCompression;
unsigned cupsHeight;
float cupsImagingBBox[4];
unsigned cupsInteger[16];
char cupsMarkerType[64];
unsigned cupsMediaType;
unsigned cupsNumColors;
char cupsPageSizeName[64];
float cupsPageSize[2];
float cupsReal[16];
char cupsRenderingIntent[64];
unsigned cupsRowCount;
unsigned cupsRowFeed;
unsigned cupsRowStep;
char cupsString[16][64];
unsigned cupsWidth;
};
Version 1 page header
struct cups_page_header_s {
unsigned AdvanceDistance;
cups_adv_t AdvanceMedia;
cups_bool_t Collate;
cups_cut_t CutMedia;
cups_bool_t Duplex;
unsigned HWResolution[2];
unsigned ImagingBoundingBox[4];
cups_bool_t InsertSheet;
cups_jog_t Jog;
cups_edge_t LeadingEdge;
cups_bool_t ManualFeed;
unsigned Margins[2];
char MediaClass[64];
char MediaColor[64];
unsigned MediaPosition;
char MediaType[64];
unsigned MediaWeight;
cups_bool_t MirrorPrint;
cups_bool_t NegativePrint;
unsigned NumCopies;
cups_orient_t Orientation;
cups_bool_t OutputFaceUp;
char OutputType[64];
unsigned PageSize[2];
cups_bool_t Separations;
cups_bool_t TraySwitch;
cups_bool_t Tumble;
unsigned cupsBitsPerColor;
unsigned cupsBitsPerPixel;
unsigned cupsBytesPerLine;
cups_order_t cupsColorOrder;
cups_cspace_t cupsColorSpace;
unsigned cupsCompression;
unsigned cupsHeight;
unsigned cupsMediaType;
unsigned cupsRowCount;
unsigned cupsRowFeed;
unsigned cupsRowStep;
unsigned cupsWidth;
};
Machine Learning is all the rage these days, and with open source frameworks like TensorFlow developers have access to a range of APIs for using machine learning in their projects. Magenta, a Python library built by the TensorFlow team, makes it easier to process music and image data in particular.
Since I started learning how to code, one of the things that has always fascinated me is the concept of computers artificially creating music. I even published a paper about it in an undergrad research journal my freshman year of college.
Let's walk through the basics of setting up Magenta and programmatically generating some simple melodies in MIDI file format.
First we need to install Magenta, which can be done using pip. Make sure you create a virtual environment before installing. I am using Python 3.6.5, but Magenta is compatible with both Python 2 and 3.
Run the following command to install Magenta in your virtual environment. It's a pretty big library with a good number of dependencies, so it might take a bit of time:
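Assuming the package is published on PyPI under the name magenta (check the project README if this has changed for your version):

```shell
pip install magenta
```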
Alternatively, if you want to install Magenta globally you can use the following shell commands to run an install script created by the Magenta team to simplify things:
This will give you access to both the Magenta and TensorFlow Python modules for development, as well as scripts to work with all of the models that Magenta has available. For this post, we're going to be using the Melody recurrent neural network model.
Rather than training our own model, let's use one of the pre-trained melody models provided by the TensorFlow team.
First, download this file, which is a .mag bundle file for a recurrent neural network that has been trained on thousands of MIDI files. We're going to use this as a starting point to generate some melodies. Save it to the current directory you are working in.
When generating a melody, we have to provide a priming melody. This can be a MIDI file passed with the --primer_midi flag, or a format that Magenta uses, which is a string representation of a Python list, passed with the --primer_melody flag. Let's create some melodies using middle C as the starting note, which in this format is '[60]'.
Each melody will be 8 measures in length, corresponding to the --num_steps flag, which specifies how many 16th-note steps long the generated tune will be.
With your virtual environment activated, run the following command, making sure to replace /path/to/basic_rnn.mag with the actual path to the .mag file you just downloaded:
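A command along these lines should work - melody_rnn_generate is the script Magenta installs for this model, 8 measures in 4/4 time is 128 sixteenth-note steps, and the flag names below are assumptions worth double-checking against your installed version:

```shell
melody_rnn_generate \
  --config=basic_rnn \
  --bundle_file=/path/to/basic_rnn.mag \
  --output_dir=/tmp/melody_rnn/generated \
  --num_outputs=10 \
  --num_steps=128 \
  --primer_melody="[60]"
```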
This should output 10 MIDI files in the directory /tmp/melody_rnn/generated, or whichever directory you specify with the --output_dir flag. It will take some time to execute, so be patient!
Navigate to the output directory, and try playing the MIDI files to see what kind of music you just created! If you are on Mac OS X, the GarageBand program can play MIDI files.
Here's an example of a melody that was generated when I ran this code:
Those melodies are cool for the novelty of a machine composing music, but to me it still sounds mostly like a bunch of random notes. The Magenta team provides two other pre-trained models we can use to generate melodies that have more structure.
The previous model worked by generating notes one by one, only keeping track of the most recent note. That's why a lot of the melodies sound all over the place. Among other things, this Lookback RNN keeps track of the most recent two bars, so it is able to add more repetition into the music.
Download the Lookback RNN model, and save it to the same directory you saved basic_rnn.mag.
Let's generate some melodies using the Lookback RNN, remembering to replace /path/to/lookback_rnn.mag with the actual path to the .mag file you downloaded:
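Only the config and bundle file change from the previous command (again a sketch - verify the flags against your install):

```shell
melody_rnn_generate \
  --config=lookback_rnn \
  --bundle_file=/path/to/lookback_rnn.mag \
  --output_dir=/tmp/melody_rnn/generated \
  --num_outputs=10 \
  --num_steps=128 \
  --primer_melody="[60]"
```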
You will likely notice that the melodies you generate with this one have a lot more repetition. Here's one of the ones I got:
Now let's try out the Attention RNN. Instead of just looking back at the last two measures, this one is designed to give more long-term structure to generated compositions. You can read about the algorithm in this blog post.
Again download the model, and save it to the right directory, and then run the following:
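A sketch of the command, assuming you saved the bundle as attention_rnn.mag; doubling --num_steps to 256 is what makes these melodies twice as long:

```shell
melody_rnn_generate \
  --config=attention_rnn \
  --bundle_file=/path/to/attention_rnn.mag \
  --output_dir=/tmp/melody_rnn/generated \
  --num_outputs=10 \
  --num_steps=256 \
  --primer_melody="[60]"
```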
For this example, we are generating melodies that are twice as long. One of the melodies I generated seems to even have a structure that could be repeated. To me, this one sounds like it could be turned into a long form song, complete with different sections that flow into each other.
In the previous examples we have only used a single note, middle C, as the priming melody. But it's much more interesting to create music from an already existing melody that a human wrote. Let's use a MIDI file with a more complex melody that was written by a human to create a musical collaboration between man and machine.
I'm a guitarist and would like to hear a computer shred. So for this example I'm going to use the guitar solo from Omens of Love by the Japanese fusion band T-Square. We'll use the first four measures of this solo, which provide a nice melodic start, and see if we can generate four more measures to complement it.
Download this MIDI file containing a 4-bar melody, and save it to a directory of your choice.
Now use whichever model you want from the previous sections to create a computational jam session! I am going to use the Attention RNN because I liked some of the results I got before:
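The only change from the earlier commands is swapping the --primer_melody flag for --primer_midi; the file name melody.mid is a stand-in for wherever you saved the downloaded MIDI file, and 128 steps covers the 4 primer bars plus 4 generated ones:

```shell
melody_rnn_generate \
  --config=attention_rnn \
  --bundle_file=/path/to/attention_rnn.mag \
  --output_dir=/tmp/melody_rnn/generated \
  --num_outputs=10 \
  --num_steps=128 \
  --primer_midi=/path/to/melody.mid
```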
You might have to generate a ton of output melodies to get something that sounds human, but out of the 10 I generated, this one works really nicely!
We were able to generate some simple melodies with some pre-trained neural network models, and that's awesome! We're already off to a great start when it comes to using machines to create music.
It's a lot of fun to feed different MIDIs to a neural network model and see what comes out. Magenta offers a whole variety of models to work with, and in this post we've only covered the first steps to working with the Melody RNN model.
Keep an eye out for future Twilio blog posts on working with music data using Magenta, including how to train your own models.
Feel free to reach out for any questions or to show off any cool artificial creativity related projects you build or find out about: