LiveJournal for rndmcnlly.


Sunday, January 18th, 2009

Subject: userpic
Time: 9:10 pm.
I've just updated my userpic. The motivation to actually get a LiveJournal in the first place was to use the silly picture I used to have. I appear to have grown up since then.

Saturday, November 22nd, 2008

Subject: beat matching mini-game / meditative exercise
Time: 6:56 am.
Inspired by the rhythm mechanic in a game prototype created by students at UCSC, I decided to make my own beat matching toy. Try it here: http://adamsmith.as/typ0/k/fralbs_pulse/ -- give it a minute to load the music. Once it is running you can click in the box or press keys while the applet is active to try to match the beat. The audible feedback comes in after 32 beats.

I haven't tested it on more than one computer, so the (critical) timing may be horribly off.
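For the curious, the heart of a beat-matching check can be sketched in a few lines. This is a hypothetical reconstruction, not the applet's actual code -- the tempo, units, and scoring are all my assumptions. Given tap timestamps and a fixed BPM, it measures each tap's distance to the nearest beat on the grid:

```python
def tap_errors(tap_times, bpm=120.0):
    """Distance (in seconds) from each tap to the nearest beat on a fixed grid.

    Hypothetical sketch: the real applet's tempo and scoring are unknown.
    """
    period = 60.0 / bpm  # seconds per beat
    errors = []
    for t in tap_times:
        phase = t % period                          # offset into the current beat interval
        errors.append(min(phase, period - phase))   # nearest beat, before or after
    return errors
```

At 120 BPM (a 0.5 s period), a tap at 1.26 s is 0.24 s from the nearest beat (the one at 1.5 s).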

Friday, October 3rd, 2008

Subject: Game Design Metatheory
Time: 4:54 pm.
Crawlers, please note that I am linking to the blog for my new research project, game design metatheory. Humans, please note that I might be posting cool demos on that site occasionally -- look up "game design metatheory" a year from now to see how it all came out.

Wednesday, June 4th, 2008

Subject: timesince.rns
Time: 1:59 am.

timesince.rns
Originally uploaded by rndmcnlly
According to the directory listing of my Reason project dump, it's been over two years since I uploaded any new music I made to my site. Today that changes. I've just added a new track, forlackofabetternamely named 'timesince'.

A few notes:
- It was just two or three hours from opening Reason to uploading the mp3.
- Normally I don't make the loops, arrange them into a larger piece, and then tweak for upload all in a single sitting, but it has been so long that I jumped right through the pipeline.
- The notes in each loop were all laid out with the matrix step sequencer (the new sequencer window view is still a bit alien to me), then arranged as blocks after.
- I've explored a lot of new music avenues since I last made a full track, but this was an experiment in working in the old methods, sticking to Reason-only and no external tool integration, livecoded or otherwise.
- As with most of my stuff, I mainly care about the breakdown and add the fairly generic drums mostly as a way to create a higher reference energy level for contrast.
- I'm not happy with the overall arc of development, and I forgot to trim out the silence at the end of the mp3, but I've gone and published it now, and it takes so long to export this track (which can only play in real time when settings are turned down) that I'm leaving it as is.

http://adamsmith.as/music/reason/timesince.mp3

Friday, August 17th, 2007

Subject: Invisible Hand (of Adam Smith)
Time: 12:47 am.

Invisible Hand
Originally uploaded by rndmcnlly

Another night of graphics/vision research and development!

Tonight's installment brings us an improvement over existing document-camera technology. My system intelligently keeps track of occluding foreground objects and a stable background that changes only incrementally over time, creating the effect of writing with an invisible hand.

See the full demo (2MB zip of jpegs): http://adamsmith.as/typ0/ihdemo.zip

Key:
- The bottom left image is read directly from the camera (a crappy webcam taped to the wall in my case).
- The bottom right image is the blurred version of the first image.
- The top right image is the blurred version of the background image from last frame (not shown).
- The bottom middle image is the absolute color difference between the two blurred images.
- The top middle is a thresholded version of the difference image.
- The top left is the old background image with areas not occluded by foreground objects copied from the freshly captured image below.

Large blobby objects like hands are (ideally) never inserted into the background state, so viewers can always see "through" them (which is really "back in time") to what was there before the hand moved in the way. The clue that something is a transient occluder is that it touches the border of the image. I have some code in there (not demonstrated) that checks whether a foreground region is connected to the border of the image. If it is not, it is immediately imported into the background scene. In this way, if a hand reaches in and places a dark object in the middle of the page, the object will instantly pop into visibility as soon as the hand disconnects from it!
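The background-update rule can be sketched compactly. This is a minimal reconstruction in Python/NumPy, not the original code: it skips the blur stages for brevity, and the threshold is an arbitrary placeholder. Change regions that touch the image border are treated as transient occluders and kept out of the background; interior regions are imported immediately.

```python
import numpy as np
from collections import deque

def update_background(background, frame, thresh=30):
    """One step of the invisible-hand background update (simplified sketch:
    the real pipeline blurs both images before differencing)."""
    changed = np.abs(frame.astype(int) - background.astype(int)) > thresh
    h, w = changed.shape
    # flood fill from the border: changed pixels connected to the edge
    # are treated as transient occluders (hands reaching into the frame)
    occluder = np.zeros_like(changed)
    queue = deque((y, x) for y in range(h) for x in range(w)
                  if changed[y, x] and (y in (0, h - 1) or x in (0, w - 1)))
    for y, x in queue:
        occluder[y, x] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and changed[ny, nx] and not occluder[ny, nx]:
                occluder[ny, nx] = True
                queue.append((ny, nx))
    # interior change regions are imported into the background immediately
    new_bg = background.copy()
    interior = changed & ~occluder
    new_bg[interior] = frame[interior]
    return new_bg
```

A blob placed in the middle of the frame (disconnected from the border) is imported at once, while a border-touching blob -- a stand-in for a hand -- is held back.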

Up next, I have to remove those shadow boundaries so my algorithm doesn't get confused by their smooth gradients.


Wednesday, August 15th, 2007

Subject: Selective Emphasis
Time: 12:38 am.

Selective Emphasis
Originally uploaded by rndmcnlly
These two images are screenshots from a program I just wrote in Processing. They were taken just a few seconds apart under the same lighting conditions. The dramatic change in perceived lighting is due to a selective emphasis applied automatically, live and in real time, to images coming from the webcam on top of a modern iMac.

A region of interest is selected by the user by moving either the object or the camera to place the interesting region in the center of the image. Given a rudimentary initial guess at a foreground-background segmentation (a circular lump about the center of the screen), the algorithm repeatedly builds a model of color likelihood given a segmentation label (a value between 0 and 255), then relabels each pixel with its most likely label. At the end of each pass, the label image is smoothed with a small Gaussian kernel. Passes are synchronized with grabbing new frames from the camera, so the label image from the previous frame becomes the prior labels for the next frame, exploiting temporal coherence.

The combined sharing of information across space and time allows the algorithm to track moving regions of interest even under drastic appearance changes. The trade-off is that the region of interest can shift undesirably on occasion. Though it is uncommon, it is quite possible for the region of interest to become disconnected; in the right image, several distinct blobs are visible on the door.

To create visual emphasis, the areas outside of the region of interest are darkened and blurred slightly.
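One pass of the relabeling loop can be sketched in NumPy. This is a deliberately simplified, hypothetical version of the idea (grayscale instead of color, a soft two-way label instead of 256 levels, and a 3x3 box blur standing in for the Gaussian kernel):

```python
import numpy as np

def refine_labels(img, labels, bins=8):
    """One pass: fit per-label intensity histograms, relabel, then smooth.

    Simplified sketch: grayscale image, soft labels in [0, 1].
    """
    idx = (img.astype(int) * bins) // 256   # quantize intensities into histogram bins
    fg = labels > 0.5
    # likelihood model of intensity given each label, with a smoothing epsilon
    p_fg = np.bincount(idx[fg].ravel(), minlength=bins) + 1e-6
    p_bg = np.bincount(idx[~fg].ravel(), minlength=bins) + 1e-6
    p_fg, p_bg = p_fg / p_fg.sum(), p_bg / p_bg.sum()
    # relabel each pixel with its posterior foreground probability
    new = p_fg[idx] / (p_fg[idx] + p_bg[idx])
    # cheap 3x3 box blur in place of the Gaussian smoothing
    padded = np.pad(new, 1, mode='edge')
    return sum(padded[dy:dy + new.shape[0], dx:dx + new.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
```

Feeding the output back in as the next frame's prior labels gives the temporal-coherence effect described above.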

Source and binary (128k, requires quicktime for camera access): http://adamsmith.as/typ0/sketch_070813a-001.zip

Monday, August 13th, 2007

Subject: Proceedings of ThinkTub 1.0
Time: 1:00 am.
The plain-text record below was written collaboratively by Adam Smith and Jeff Lindsay during an evening session of the first instance of an event self-named, temporarily, ThinkTub. While it may be read in a linear fashion, it was expanded and edited non-linearly while being projected on a wall. For the most part, outside resources were not used -- only a dictionary, periodically. The bulk of this content was generated on the fly using only itself as context.

*snip* (collapsed)

Wednesday, March 28th, 2007

Subject: mashdown
Time: 4:43 am.

mashdown
Originally uploaded by rndmcnlly.
So it's past 4:00am again. I wondered for a while what it would be like to have arbitrary digital signal processing interposed in a streaming audio network. In the past I've gotten live synthesis streamed out over the network, but tonight I've created something different (and without writing any code).

What I have created is an octave-down pitch-shifted live mp3 stream of another stream from someone on the internet. It's not ready to be consumed by the masses (I haven't connected it up to a Shoutcast server yet, but this is trivial and only really matters if someone other than me wants to hear it). The particular stream I chose to mangle was Cryosleep, the beatless stream from BlueMars.org. The droning chords they tend to play take on a rumbling nature when shifted down an octave.

Now, data coming from a Shoutcast server like theirs isn't ready to be consumed by any command line tools I know (wget didn't play icy with it). I used StreamRipperX to consume the stream and told it to create a local relay station that, luckily, wget wouldn't balk at. An instance of wget requested an endless stream of data from http://localhost:18000/ and saved it into a named pipe (created with mkfifo) called some.mp3. I used lame --decode to pipe this data into another named pipe called some.wav. This file was consumed by sox, a self-proclaimed "swiss army knife of sound processing", which highpassed then pitch-shifted (preserving tempo) the incoming data and wrote it to yet another pipe, out.wav. I converted the stream in out.wav to a final pipe, out.mp3, using lame in the forward sense. And finally, after all that processing junk, I had VLC open the file and treat it as a pipe (not all that successfully). At last, playing through the speakers, I had a pitch-shifted version of what the live stream was playing a few seconds ago.

Um, yay for the command line. And, of course, living in a world where my mom's user-friendly desktop computer is also friendly to hackers.

Tuesday, March 13th, 2007

Time: 9:22 pm.
Note to self: "I want to disable ownership."

Wednesday, February 21st, 2007

Subject: this is just a test
Time: 8:39 pm.
no, seriously

Tuesday, January 23rd, 2007

Subject: jeff blogs to me while i take a nap
Time: 11:08 pm.
Progrium: AMSTERDAM
Progrium: andy's gf has a new kind of science and some other cool books
Progrium: on the plane i was trying to formalize my conceptual framework
of thought by building a sort of semantic network around all the things
i like to think about, trying to figure out why i like things so much or
see so much profoundness in things as simple as "models"
Progrium: that way i can write about it and convince people these things
are so great
Progrium: but it's hard because i know these things are part of a system
that i have an idea of abstractly, but in order to communicate it and
fully grok it myself, it's a lot of work
Progrium: or something. i'm going to try and sleep again
Progrium: Progrium signed off at 4:42 pm.
Progrium: Progrium signed on at 4:52 pm.
Progrium: i got up because i had this idea of "memetic genealogy" where
you can trace a behavior or idea back to essentially the inspiring meme
Progrium: i had done it before, but i thought it was more psychological
because i could trace back ideas and behavior of mine back to thigns
when i was little, like a fascination with trucks that led to my
fascination with programming
Progrium: but of course, the truck thing i wasn't born with, it come
from some person, and then in that person it came from somewhere
Progrium: so theoretically you could trace a meme back to some root meme
... but we'd probably not get that far since it's hard enough to get to
the root of actual genealogy and memes seem like they'd be harder to
trace
Progrium: and if you search for google on it
Progrium: you get one mention of "memetic genealogy" but in the use of
the history of the idea of memes
Progrium: and then there's maybe 1 or 2 uses that fit mine
Progrium: one of them being about scientology and their memetic
genealogical relation to mormons and the freemasons
Progrium: on the plane i also came up with the idea of memetic model,
which is to complement the idea of mental model... just because i
remember you saying something about specifying mental in front of model
when "there isn't a model that isn't a mental model" or something like
that
Progrium: and i need to read more about memes to get this totally right
Progrium: but a memetic model is pretty much the use of model that most
of us use, it's the shared model that we have
Progrium: because a mental model is relative, the memetic model is
projected from the mental model
Progrium: it's like the mental model is the local working copy and the
memetic model is the one in the repository that people learn about and
contribute to
Progrium: ...you fail
Progrium: Progrium signed off at 5:24 pm.

Wednesday, September 27th, 2006

Subject: So much math meat!
Time: 2:11 am.
I was poking around on the internet today... and suddenly it all started to make sense, but now I'm feeling a little lost. I ordered myself a textbook on impulse to help me figure it all out. I'm all about taking a cool idea and applying it where people might not expect, and I get this strange feeling that geometric algebra might remain one of those cool-ideas-to-apply in the long run.

And now, to scare you away and draw me further in (because I learned a few of these things from a brief intro I just read), here are amazon.com's statistically improbable phrases: mixed signature spaces, restricted conformal group, spacetime algebra, geometric algebra formulation, multiparticle quantum theory, third bivector, acceleration bivector, quantum inner product, monogenic equation, rotor group, bivector algebra, pure bivector, geometric product, rotor description, anticommuting vectors, spacetime bivectors, conformal space, highest grade element, spherical monogenics, shape tensor, full conformal group, second field equation, rotor equation, directed integral, using geometric algebra. How will I ever sleep now??

In four words: cross products get x'ed.

Wednesday, August 30th, 2006

Subject: crazy idea with military/intelligence applications
Time: 12:13 pm.
I'm not sure how I ended up there, but I found myself reading about linear predictive coding of speech just now. My crazy idea is a system that transforms a voice in such a way that the content of the speech is clear but the identity of the speaker is destroyed. The "scramble suits" in A Scanner Darkly performed a similar function, but in a fictional world. For a quick rundown of LPC, let me say that it is a fairly simple process that breaks down an incoming audio signal based on the assumption that it is being produced by a buzzer in a resonating tube -- not a bad approximation for the vocal cords and the mouth and nose cavities in the human head. LPC has already been used in speech compression as well as in creating vocoder-style effects in music, among other applications. For each "frame", a window of a few ms, the LPC encoder gives you a handful of coefficients representing the shape of the resonating filter (important for preserving formants) and a base frequency for the buzzer source. In my system these two pieces of data would be regularized in a way that maps many speakers' voices to the same output. The buzzer frequency for high- and low-pitched voices would be normalized by finding the current frequency's difference from a moving average. This difference would be applied to a fixed base frequency defined in the algorithm. In this way, meaning-conveying pitch variations (rising tone in questions, etc.) would be preserved while obscuring the true pitch of the speaker's voice. The shift between the canonical frequency and the speaker's buzzer frequency could be used to shift the formants represented by the filter coefficients up or down, preserving their relative location without further leaking pitch information.
Furthermore, the space of normalized filter coefficients could be segmented into bins that allow enough variation for good intelligibility but collect several speakers' variations in vocalization into the same buckets (although a simple VQ isn't immediately applicable in this space). So far I only have ways to normalize the voice with respect to overall pitch, but there are several other identifying features of a voice that would still be perceivable after this process: accent, pace, vocabulary, grammar. To defeat these as well, one would probably have to go to a system that read in a large window of speech, correctly extracted and interpreted the natural language, and resynthesized its meaning with a canonical grammar -- certainly not feasible for real-time communication, nor even possible with any current technology I know of. My system provides a first line of defense against voice identification while only introducing delays on the order of a frame, and it is general enough to apply to several languages without an extensive database -- systems with far greater complexity already exist with hardware implementations inside the average cell phone.
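The pitch-normalization step (take the deviation from a moving average, re-apply it to a fixed canonical base) is easy to sketch. Everything here -- the base frequency, the window length, the use of a plain linear average rather than semitones -- is my guess, not a spec:

```python
import numpy as np

def normalize_pitch(pitches, base=120.0, window=20):
    """Map per-frame buzzer frequencies onto a fixed base pitch, keeping
    only the deviation from a moving average (the meaning-carrying contour)."""
    out = []
    history = []
    for p in pitches:
        history.append(p)
        avg = np.mean(history[-window:])   # moving average of recent pitch
        out.append(base + (p - avg))       # re-apply deviation to canonical base
    return np.array(out)
```

A flat 200 Hz voice comes out flat at 120 Hz, while a rising question contour still rises above 120 Hz -- the speaker's absolute pitch is discarded, the contour kept.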

How well does it really work? I have no idea, I just thought of it. Maybe I can string together a minimal prototype in PureData when this fast-paced working lifestyle passes over.

Wait, isn't this hiding stuff just the opposite of what Adam is always talking about? Hmm, very true... Well, the normalized voice would be much more compressible -- youd only need to send the pitch-differential signal (which could be heavily mutilated while still retaining clarity, it'd just sound a little robotic) and an identifier for the filter-coefficient bucket used. Sure its no good for hi-fi music compression, but the application here is encoding a single speakers words and expression, nothing more. Plenty of shortcuts to make!

Tuesday, August 29th, 2006

Subject: intense optimization!
Time: 10:03 pm.
I actually had to break out a pencil and paper to get this one right: full integer-only alpha blending with only 10 multiplies and a single divide.

    // TODO: throw in appropriate rounding values
    cA = fA + bA - ( fA*bA+255 >> 8);        // composite alpha: ~ fA + bA - fA*bA/255
    tR = bA*bR >> 8;                         // background channel pre-scaled by bA
    tG = bA*bG >> 8;
    tB = bA*bB >> 8;
    s  = s_lut[cA]; // 65535 / cA;           // un-premultiply scale factor
    cR = ((fA*(fR-tR) >> 8) + tR) * s >> 8;  // lerp toward foreground, normalize by cA
    cG = ((fA*(fG-tG) >> 8) + tG) * s >> 8; 
    cB = ((fA*(fB-tB) >> 8) + tB) * s >> 8; 



**edit: Since s only takes on 256 distinct values, I can make a 256B lookup table (called s_lut) and eliminate the divide altogether. yay.
**eeeeedit: with a 64KiB table I can eliminate ALL of the multiplies!!
**eeeeeeeeeeedit: oh wait, who put >> there in the precedence table..
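As a sanity check, the integer blend can be compared against a straightforward floating-point "over" composite. Below is my Python transcription of the code above (the lookup table is replaced by its defining division; note the >>8 approximations get rough at very low alphas, so the checks stick to the easy opaque/transparent cases):

```python
def blend_int(fR, fA, bR, bA):
    """Integer alpha blend for one channel, transcribed from the C snippet."""
    cA = fA + bA - ((fA * bA + 255) >> 8)      # composite alpha
    tR = (bA * bR) >> 8                         # background pre-scaled by bA
    s = 65535 // cA if cA else 0                # stand-in for the s_lut table
    cR = ((((fA * (fR - tR)) >> 8) + tR) * s) >> 8
    return cR, cA

def blend_float(fR, fA, bR, bA):
    """Reference non-premultiplied 'over' composite in floating point."""
    a_f, a_b = fA / 255.0, bA / 255.0
    cA = a_f + a_b * (1.0 - a_f)
    if cA == 0.0:
        return 0.0, 0.0
    cR = (a_f * fR + (1.0 - a_f) * a_b * bR) / cA
    return cR, cA * 255.0
```

An opaque foreground should return roughly fR, and a transparent foreground roughly bR; the >>8-for-/255 substitution costs a unit or two of error in those cases.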

Monday, August 28th, 2006

Subject: RE: Flash Video Player + Bittorrent client = Anybody can be YouTube
Time: 8:58 pm.
parents: http://commonchaos.livejournal.com/26486.html

I meant this to be a reply, but it was too large to fit... what else was I to do?

bt, hmmmmmmmmmmm...

In the following discussion I am assuming one is trying to create a rough replacement for YouTube, just without the giant bandwidth costs. This means, yes, there is a central server with a repository of all content and client information.

In the traditional usage of BitTorrent, moving around teh warez (and maybe legitimate Linux CD images), consumers demand error-free transmission of the entire file, with no preference for any specific part of it -- this is built right into the protocol. For moving trashy video content around, 'tube-style, your consumers have a different set of interests. They want to watch their chunk of data front to back, and many will lose interest after seeing just a small introductory piece and abandon the download. Additionally, in the interest of quick loading, users are willing to put up with some degree of glitchiness and general variability in the content's quality. I claim we need a different kind of p2p protocol to pull off what you are asking -- something that builds in the sequential-access aspect.

A further complication: people like to go on to something else as soon as a video ends. With BitTorrent there is that pressure to seed after you have finished, and that is just as important here. A way to keep people around without making them too angry would be to have a single client that plays several videos (possibly including the search tools) so that all of your hard downloading work isn't tossed as soon as you finish watching.
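The "sequential access" bias could be as simple as a piece picker that always works at the playback frontier, unlike BitTorrent's rarest-first strategy. A toy sketch (the function name and interface are made up for illustration):

```python
def pick_piece(have, total):
    """Toy sequential piece picker: always request the earliest missing piece,
    so playback can begin long before the download completes.

    `have` is the set of piece indices already downloaded; `total` is the
    number of pieces in the video. Returns None when everything is present.
    """
    for i in range(total):
        if i not in have:
            return i   # the piece the player will need soonest
    return None
```

A real client would mix in some rarest-first selection beyond the playback window to keep the swarm healthy; this sketch shows only the sequential bias.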

a proposed solutiony like thing (collapsed)

This is great material for a whiteboard at devhaus.

Subject: volatile (in the context of multithreaded programming in c/c++/c#)
Time: 1:31 pm.
argargrgrgagargggr (choking noises), damn you, lack-of-volatile-modifier, causing nasty crashbugs that disappear when you look for them but instantly reappear when you look the other way!!!

Anyone know what I mean?

Saturday, August 19th, 2006

Subject: my my, what an interesting anthropological find
Time: 5:13 am.
I don't often post links to things I just happen to find on the internet, certainly not high schoolers' blogs (let alone eight at a time!). However, consider this find (well, not just yet):

http://www.laa.lanl.gov/earthwatch/06/

The important content of that site, to me, is the event-by-event record of a two-week youth science program. This is the first primary source I've read that has so many individuals documenting the same events (notably, experiences I can usefully compare to my own).

long explanation of why it is actually interesting to me (collapsed)

Friday, August 18th, 2006

Subject: Re: modern blawgs are evil, repent
Time: 7:53 pm.
Parents: http://commonchaos.livejournal.com/25928.html?thread=23880#t23880

Oh, woe is this post-comment dichotomy. Imagine a world where there were ONLY comments, but they could have zero, one, or many "parents". Isn't commenting all we are doing here anyway? (Not rhetorical! Identifying the things we do other than commenting would be enlightening and probably say a lot about why people do and don't like certain sites for this kind of stuff.)

Subject: DVD Monkey.app
Time: 12:16 pm.
They wanted me to stand next to a laptop for a few hours to press play whenever the DVD finished its 5-minute show... No way; AppleScript can do this for me.

on idle
	tell application "DVD Player"
		if dvd menu active then
			repeat 5 times
				press down arrow key
			end repeat
			press enter key
		end if
	end tell
	return 5 -- sleep for 5 seconds
end idle

Thursday, August 17th, 2006

Subject: money laundering
Time: 8:17 pm.
I requested, through official channels, to be a TA for this coming quarter. I had heard that "there aren't many slots available" this quarter, but I was sure they would have reserved one for the introductory computer graphics class. After all, this class has a mandatory, separate, 2-credit lab portion. No slots. Maybe all the kids were meant to learn it on their own using tutorials found on the internet (I know it's possible!). A solution was found that involves routing teh cashz0r to me from two separate sources with official descriptions not matching my intentions.

If someone asks "if you are the TA, how come you are giving a class lecture?", I can safely reply "what do you mean? I'm not the TA for this class" -- "so what are you?" -- "um, a vigilante, i guess you might say". "This is cool and all, but isn't this class supposed to be about graphics?" -- "mmmmmmmaybe.".

**edit: Officially there is one slot for every 60 students, and an expectation of at most 30 students in the graphics class. Thus, after one iteration of the long-division-of-decimal-integers algorithm, I confirm a total of 0 slots to be allocated, with a remainder of 30 unguided students. However, this is a university, not elementary school. As such, I received a recent update regarding my status that made use of not only decimals but fractions and percentages as well!
