Tuesday, November 30, 2010

Create your own Monster characters using your iPhone and Monster Studio

Introducing Monster Studio, a recent arrival on the Apple App Store. Developed by Appdicted, Monster Studio is a fun app that lets children and adults create custom monsters.
The app contains over 100 monster parts (bodies, eyes, mouths, and accessories) and 21 backgrounds to choose from. You can also use your own photo as the background: take a picture with the camera on your iPhone or iPod touch and then add a monster to that photo, or turn yourself into a monster by uploading your own photo. All graphics are high quality and produce monsters worthy of movies and comic books.
When you finish your creation, it can be shared on Facebook and Twitter.

Features offered in this fun tool:
 * Works on iPhone, iPod touch and iPad
* HD Retina Display Graphics
 * Share via Twitter and Facebook
 * Over 100 fun monster parts and Accessories (and growing)
 * Add multiple monsters and parts to a single photo
 * Make a monster or add monster parts to your photos
 * Adjust size, rotation and placement of the monster parts
 * Adjust transparency levels of parts (make a see through monster)
 * Take a new picture or load one from the album
 * Use one of 21 pre-made backgrounds or use your own photo as the background

Device Requirements:
 * iPhone 3G, 3GS, 4 and iPod touch 3, 4, iPad
 * Requires iOS 3.1 or later
 * 10.5 MB

Pricing and Availability:
Monster Studio 1.1 is priced at only $0.99 (USD) and available worldwide exclusively through the App Store in the Entertainment category. 

For more information click here

Monday, November 29, 2010

Drawing Part of a UIImage

Happy Thanksgiving, and sorry for the relative lack of posts here lately. Things are crazier than ever and it's been a challenge finding time to shower, let alone blog.

I do have something to share, today, though. No, it's not the next chapter of OpenGL ES 2.0 for iOS. It's a category that some of you may find useful: a method that allows you to draw only part of a UIImage rather than the entire thing.

On the Mac, NSImage has a handy instance method called drawInRect:fromRect:operation:fraction: that lets you specify exactly which part of an image to draw. On UIImage, we've only got the ability to draw the entire image unless we drop down to Core Graphics calls. We don't have a nice, easy, convenient way using just UIImage to draw a portion of the image it represents.

I needed this ability in an application I'm working on, so I hacked out the following category. At first glance, this may look inefficient, since we appear to be making a copy of the instance's backing CGImage in order to create the sub-image; however, I believe that CGImageCreateWithImageInRect() references the original image's bitmap data. I haven't confirmed that it doesn't make a copy of the bitmap data, but the documentation certainly seems to imply it. Anyone know?

Anyway, here is the category; I've even commented the code more pedantically than is normal for me in case anyone might be confused about what's going on. Improvements and bug fixes are, as always, welcome.

@implementation UIImage(MCDrawSubImage)

- (void)drawInRect:(CGRect)drawRect fromRect:(CGRect)fromRect blendMode:(CGBlendMode)blendMode alpha:(CGFloat)alpha
{
    // Create a new image that references just the requested portion of this image
    CGImageRef drawImage = CGImageCreateWithImageInRect(self.CGImage, fromRect);
    if (drawImage != NULL)
    {
        CGContextRef context = UIGraphicsGetCurrentContext();

        // Push current graphics state so we can restore later
        CGContextSaveGState(context);

        // Set the alpha and blend based on passed in settings
        CGContextSetBlendMode(context, blendMode);
        CGContextSetAlpha(context, alpha);

        // Take care of the Y-axis inversion problem by translating the context on
        // the y axis (use the destination rect's height so differently-sized
        // source and destination rects still land in the right place)
        CGContextTranslateCTM(context, 0, drawRect.origin.y + drawRect.size.height);

        // Scaling -1.0 on y-axis to flip
        CGContextScaleCTM(context, 1.0, -1.0);

        // Then accommodate the translate by adjusting the draw rect
        drawRect.origin.y = 0.0f;

        // Draw the image
        CGContextDrawImage(context, drawRect, drawImage);

        // Clean up memory
        CGImageRelease(drawImage);

        // Restore previous graphics state to what it was before we tweaked it
        CGContextRestoreGState(context);
    }
}

@end
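
For the curious, here's a minimal usage sketch showing how you might call it from a view's drawRect: method; the sprite sheet name and rectangles below are placeholders I made up for illustration, not anything from the project this came out of.

// Usage sketch (placeholder image name and rects): draw the 64x64 tile located
// at (128, 0) in a sprite sheet into a 64x64 rect near this view's top-left.
- (void)drawRect:(CGRect)rect
{
    UIImage *sheet = [UIImage imageNamed:@"sprites.png"];
    [sheet drawInRect:CGRectMake(20.0f, 20.0f, 64.0f, 64.0f)
             fromRect:CGRectMake(128.0f, 0.0f, 64.0f, 64.0f)
            blendMode:kCGBlendModeNormal
                alpha:1.0f];
}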

Countdown to Christmas - Holiday Puzzles Released for the iPad

Countdown to Christmas - Holiday Puzzles is the first app released by Twigsbury for iPad users.
Countdown to Christmas - Holiday Puzzles is a challenging puzzle game for kids and adults alike. It includes custom recordings of 12 classic holiday melodies and illustrations by the Kersten Brothers Studio.
You stroll through Santa's Village to unlock rib-tickling picture puzzles. When the puzzles are completed, they become interactive scenes with animations and sounds that bring them to life.
Countdown to Christmas - Holiday Puzzles keeps kids immersed in puzzles and lets them have great fun. The app also lets you change the difficulty level of each puzzle, keeping play interesting for kids of all ages.

Device Requirements:
 * Compatible with iPad
 * Requires iOS 3.2 or later
 * 107 MB

Pricing and Availability:
Countdown to Christmas - Holiday Puzzles 1.0 is $1.99 USD and available worldwide exclusively through the App Store in the Entertainment category.

For more information click here

Friday, November 26, 2010

Learn more about your iPhone contacts with the RememberMe? Quiz

Announcing the availability of the fast-paced quiz game Remember Me? on the iTunes App Store. Developed by PhoApps ApS, "Remember Me?" generates its questions from the user's own iPhone / iPod touch address book contacts.
The quiz is time-based, and players can choose between three difficulty levels that affect the type of questions presented throughout the game as well as the time available.
The idea behind the game is that players should have fun while learning more about their contacts.
Remember Me? gameplay includes recognizing photos, spotting fake contacts, names, addresses, work details, birthdays and more.

Main features:
 * Quiz questions are generated from the user's own contact library
 * Three difficulty levels: Easy, Normal, and Insane
 * Gameplay includes recognizing photos, fake contacts, names, addresses, work details, birthdays and more
 * High score
 * Optimized for Retina display
 * Supports multitasking in iOS 4 (app switching)

Device Requirements:
 * iPhone, or iPod touch compatible
 * Requires iOS 4.0 or later
 * A minimum of 10 address book contacts
 * 2.9 MB

Pricing and Availability:
RememberMe? Quiz 1.0.2 is $0.99 or equivalent amount in other currencies and available worldwide exclusively through the iTunes App Store in the Games category.

For more information click here

Thursday, November 25, 2010

Cogniflame updates Flingy for iPhone and iPod touch - Now With Jets

Independent developer Cogniflame Pty Ltd has released a new game for iPhone, iPad and iPod touch. Flingy v1.2 is the developer's latest game, offering Apple users challenging physics-based action and lots of upward speed.
Flingy the robot has fallen from his home in the sky! Propelled by his jetpack, Flingy must swing around spheres to get back to his beloved Mrs Flingy - but he's clumsy and needs you to control his grapple! Can you help save Flingy?
Simply tap spheres to grapple to them and tap anywhere to disconnect. Use Flingy's momentum and his jetpack to shoot him higher and higher, but be sure not to lose your momentum or run out of fuel!

Latest Version Highlights:
 * User-controlled jetpack
 * Fuel pickups and gauge
 * New flingy bonuses!
 * Random 'mystery' spheres
 * Now features high-score tables for both height and score
 * Novel physics-based gameplay
 * Rockets, timed spheres, and more
 * Multiple levels with continuous art
 * Original music for each level
 * Help guide with video and images
 * Post scores to Facebook and Twitter
 * OpenFeint enabled
 * Feedback, bonus boost, and bonus points for good flings
 * Momentum-dial to help get you into the 'great fling' groove

Device Requirements:
 * iPhone, iPod touch, and iPad
 * Requires iOS 3.0 or later (iOS 4.0 Tested)
 * 16.0 MB

Pricing and Availability:
Designed for Apple's iPhone and iPod touch, Flingy 1.2 is only $0.99 USD and available worldwide exclusively through the App Store in the Games category.

For more information click here

Wednesday, November 24, 2010

Perfect Web Browser 2.0 - Revolutionary iPad Web Browser Available Now

Announcing the release of Perfect Web Browser 2.0 for iPad developed by Ingenious Creations.
Perfect Web Browser brings the best of the web and simplicity together in one unified package. It is the most advanced, feature-packed web browser available today. Perfect Web Browser offers users a desktop-class web experience on their iPad, including the following features: REAL-Tabs, One Handed Convenient Scrollbar, TV Video Output, Desktop Browser Web Rendering, Auto Scroll, Offline Saved Pages, Private Mode, Multi-Touch Gestures, Fast Tab Switching, In-Page Search, Font Size Adjustment, Web Compression, Fullscreen & more.

Available Today, users can enjoy the following new features just in time for the holiday season.

Major New Features:
 * One Tap Bookmark Syncing from any major Mac/PC web browser
 * Swipe through Web Pages like Photos for intuitive navigation
 * Background Streaming Audio
 * Multitasking
 * "Open In..." integration for Saved Webpages / PDFs
* Performance Enhancements
* Major optimization for reduced memory usage
* Redesigned UI
 * Memory Alert Notification

Device Requirements:
 * Compatible with iPad
 * Requires iOS 3.2 or later
 * 0.4 MB

Pricing and Availability:
Perfect Web Browser 2.0 is $2.99 USD and available worldwide exclusively through the App Store in the Productivity category.

For more information click here

Tuesday, November 23, 2010

E-Folk 1.0 released - Guitar for beginners

Introducing David Palmerio, an independent software developer and the creator of E-Folk 1.0. E-Folk 1.0 is a just-released app for iPhone, iPad and iPod touch that teaches you to play acoustic guitar. The app is an acoustic guitar method for beginners and requires no prior musical knowledge. E-Folk aims to have you playing in less than two months and includes 11 lessons. The musical notation and theory are accompanied by audio and video recordings.

Part 1 - Basics
 * Lesson 1: Hold your guitar, understand tablatures and play your first Boogie
 * Lesson 2: Right hand fingering for arpeggios study
 * Lesson 3: What is a chord?
 * Lesson 4: Play with a pick on chord progression

Part 2 - Chords
 * Lesson 1: The Folk rhythm
 * Lesson 2: Three beats rhythm
 * Lesson 3: Arpeggios on chord progression
 * Lesson 4: Palm mute.

Part 3 - Barred Chords
 * Lesson 1: Minor barred chords
 * Lesson 2: The F barred chord position
 * Lesson 3: The C barred chord position

Feature Highlights:
 * Automatic runs of the tabs in landscape display
 * Resizable tabs in portrait mode
 * String sounds to help you tune your guitar

Supported Languages:
 * US English and French

Device Requirements:
 * iPhone, iPod touch, and iPad
 * Requires iOS 3.1.2 or later (iOS 4.0 Tested)
 * 14.0 MB

Pricing and Availability:
E-Folk 1.0 is $3.99 USD and available worldwide exclusively through the App Store in the Music category.

For more information click here

Friday, November 19, 2010

OpenGL ES Course on iTunes University

Those of you waiting for the next chapter of OpenGL ES 2.0 for iOS can do yourselves a favor by checking out this course on iTunes University. It's an Advanced iOS Development course taught by Brad Larson at the Madison Area Technical College, and the most recent lesson is on OpenGL ES. You can also find the course notes here.

Full Fat announces Flick Golf for iPhone and iPad

Full Fat Games is announcing the upcoming release of their golfing game, Flick Golf, for Apple consumers. Flick Golf is coming to the App Store this November.
The gameplay includes several beautifully rendered 3D environments and unique ball controls. It features two main game modes, Quickshot and World Tour, each with its own set of unique and compelling environments.
Flick Golf follows on from the company's current releases, Deadball Specialist and Zombie Flick.

Flick Golf Features:
 * Try to shoot the perfect score in Quickshot mode
 * Play World Tour, from the USA's West Coast to Japan in the Far East
 * Varying wind from the lightest breeze, to full on gales
 * Incredibly accurate in-flight spin control
 * Stunningly realised 3D environments
 * Integrated OpenFeint leaderboards and achievements
 * Full high resolution Retina Display graphics for iPhone 4 and iPad

Device Requirements:
 * iPhone, iPod touch and iPad
 * Requires iPhone OS 3.1 or later
 * 45 MB

Pricing and Availability:
Flick Golf for iPhone will be priced at $2.99 (USD) and Flick Golf HD for iPad priced at $4.99. Both will be available for purchase on the App Store during this November, 2010.

Thursday, November 18, 2010

iRight releases the Official Hip Parade app on the BandKit framework


The official Hip Parade app, developed by iRight and powered by BandKit, an innovative new iOS framework, has appeared on the App Store. Fans will be delighted with this new app for iPhone, iPod touch and iPad devices. It keeps them up to date with everything that is happening in the world of Hip Parade.

Feature highlights:
 * Exclusive access to 2 audio tracks with coverflow mode
 * A Photo thumbnail gallery providing access to a collection of artist images, interacting via familiar pinch, swipe and multi-touch gestures
 * Direct feeds from all Hip Parade social networks including MySpace, FaceBook and Twitter
 * Streaming videos
 * Gigs listings
 * Buy music from iTunes
 * Full Retina Display compatibility
 * iOS 4 Background audio multi-tasking

Device Requirements:
 * iPhone and iPod touch
 * Requires iPhone OS 3.0 or later
 * 10.8 MB

Pricing and Availability:
Hip Parade 1.0 is only £0.59 (GBP) and is available exclusively in the UK App Store in the Music category.


For more information click here

Wednesday, November 17, 2010

OpenGL ES Update

Sorry for the silence around here lately.

Unfortunately, the next chapter of OpenGL ES 2.0 I plan to release contains detailed, step-by-step instructions based on Xcode 4 (mostly written around the time of DP2) which is still under NDA. As a result, this chapter is going to take a little longer to scrub, and I haven't had much time to scrub lately.

In the meantime, I realized that I've never linked to the PowerVR Insider SDK, so I'm rectifying that. The company that makes the GPU in all iOS devices has an SDK you can download - in fact, they have several versions of it for different platforms, including iOS. Most of the code is fairly generic C++ with just enough Objective-C around it to work, but there's a metric buttload of good code there for doing all sorts of things. Definitely worth signing up for and downloading, because they show you how to do a lot of common tasks. It's not very beginner-friendly, granted, but still a great resource you should know about. Most of the code is general OpenGL ES and not actually specific to their hardware, though some of the texture-related tools and optimizations are designed for best performance on their hardware and in some cases use vendor-specific extensions. If you're an iOS-only dev, that's not a problem at all unless Apple changes their GPU vendor.

Dunnottar releases SocioSpy for iPhone: People search in Social Networks

Dunnottar's just-released social networking tool for iPhone, iPad and iPod touch is available on the App Store. SocioSpy 1.0 provides a fast and simple way of finding the people you're interested in on the most popular social networks.
SocioSpy covers 21 popular social networks: Facebook, MySpace, FriendFeed, Twitter, Hyves, Youtube, LinkedIn, Blogger, Flickr, Xing, Digg, TypePad, WindowsLive, Posterous, LastFM, Reddit, StumbleUpon, Schoolbank, Yahoo Profiles, Ning and Google Profiles. All of the publicly available information about people on these networks is accessible to SocioSpy.
The app features a stylish and unique interface that feels like working with old paper documents. SocioSpy is useful for business as well, thanks to its quick access to information about anybody. To start a search, just fill in the name fields and select the icons of the appropriate social networks. Results come back as a list of folders with short dossiers and people's photos. A short dossier is personal information including location, interests, number of friends, and other details that people publish on social networks.

Feature Highlights:
 * Unique functionality
 * Search in 21 social networks
 * Stores search results
 * Stores people's photos on the iPhone
 * Information about a person in a short dossier and their web profile
 * Send messages to the people you find
* Stylish interface

Device Requirements:
 * iPhone, iPod touch, and iPad
 * Requires iOS 3.1.2 or later
 * 5.6 MB

Pricing and Availability:
SocioSpy 1.0 is $1.99 USD and available worldwide exclusively through the App Store in the Social Networking category. 

For more information click here

Tuesday, November 16, 2010

Cuban Slang for iOS Out Now For Free

Announcing the availability of Cuban Slang on the App Store, created for iOS by Badboi Creations. Cuban Slang is a perfect opportunity to learn Cuban slang. The app comes with 8 slang terms and offers great fun as you learn slang terms from another language.
Press the Play button and a real Cuban speaks the term out loud.
You can also fool your friends with this app: call one of your friends, start playing the slang terms from Cuban Slang, and you'll get a lot of laughs out of it.


Device Requirements:
 * iPhone, iPad or iPod touch
 * Requires iOS 3.0 or later (iOS 4.0 Tested)
 * 4.8 MB

Pricing and Availability:

Cuban Slang 1.0 is free and is available worldwide exclusively through the App Store in the Reference category. The free version of Cuban Slang includes an in-app purchase for $0.99 that unlocks the more than 85 slang terms of the full version; alternatively, you can purchase Cuban Slang Full from the App Store for $0.99 and get the more than 85 slang terms instantly.

For more information click here

Monday, November 15, 2010

Discover Musical Instruments, a new captivating iOS App for young kids

Introducing the release of Discover Musical Instruments, the third app in a series of four developed by independent developer Mathieu Brassard. The app for iPhone, iPad and iPod touch was created especially for babies and preschoolers, with rich visual and auditory content that teaches about musical instruments.
The app provides children with 36 high-quality pictures of musical instruments. When a kid touches the picture of a musical instrument in landscape mode, he or she will hear its sound. The picture also shows the name of the instrument, and touching the name will pronounce it.
Discover Musical Instruments is easy to use and will captivate your kids for sure. It also includes a good variety of wind, string and percussion instruments.

Device Requirements:
 * Compatible with iPhone, iPod touch and iPad
 * Requires iOS 3.1.2 or later
 * 17.5 MB

Pricing and Availability:
Discover Musical Instruments version 1.0 is available immediately on the App Store with a limited-time introductory price of $0.99 USD. The regular price of the App is $1.99 USD.


For more information click here

Friday, November 12, 2010

Home Run Derby Challenge for iOS now Free with 2.0 Update

An updated version of the successful baseball hitting game has just been released, and better yet, it is now free on Apple's App Store. Home Run Derby Challenge version 2.0, made by Badboi Creations, includes 4 new unlockable characters and 4 new unlockable stadiums to play in. It is an accelerometer-based game for iOS featuring great graphics. So don't be late: download the app from the App Store while it's free!
You can also get the Pro edition of Home Run Derby Challenge, which has online multiplayer and online leaderboards.

Feature Highlights:
 * Home run game tests player's speed, reflexes, and coordination
 * Realistic graphics, animation, and sound
 * Accelerometer measured swings
 * Random pitch speeds
 * Batters can get a hit, a miss, a home run, or a check swing
 * Extra points for fastball home runs
 * Bonus points for home runs on a streak

Device Requirements:
 * iPhone, iPad or iPod touch
 * 14.3 MB

Pricing and Availability:
Home Run Derby Challenge is free for a limited time only and is available worldwide exclusively through the App Store in the Games category.

For more information click here

Thursday, November 11, 2010

Bella Girl - Beauty Assistant iPhone version introduced by Mooee

The iPhone version of the popular and successful iPad app Bella Girl has just been released by Mooee. Bella Girl is a simple app that delights women by assisting them in organizing their beauty and wellness routines.
Bella Girl, a women's beauty assistant, includes several beauty and shopping tools such as a BMI calculator, a calorie counter, a beauty treatment timer, a size converter for rings, shoes and bras, a period and ovulation estimator, and smart compare for avid shoppers.
This just-released app is designed around new iOS 4 features and supports the Retina display.
The app was created for girls who care about their beauty, offering a great number of beauty tools to make their beauty care needs easier.

Bella Girl - Beauty Assistant consists of a set of beauty and shopping tools for girls:
* Ideal Weight (BMI) - Find out where you stand in fitness compared to your peers
* Calorie Counter - Find out the calories in what you eat or what you burn off exercising
* Timer - A special beauty timer lets you keep track of your face mask treatment time, nail drying time, etc.
* Event Calendar - A specialized calendar to keep track of your beauty treatment schedules and your period and ovulation schedules
* Converter - Convert ring, bra and shoe sizes between international standards
* Smart Compare - Compare cosmetics and decide which is the better value to buy

Feature Highlights:
* Bella Girl has a personal mode that remembers all your personal measurements and requirements so that you can access any tools and retrieve your personal data immediately, no need to re-enter them
* Bella Girl also has a guest mode, where you can share the App with your friends to use the beauty tools
* Bella Girl has a privacy lock so your period and ovulation dates will not be visible to others
* Timer has specific presets for you to set your nail polish drying time, hair coloring time, beauty bath soak or jacuzzi time and most importantly your face mask time. It allows you to customize your specific brand of beauty products
* A converter specially for ring, shoe and bra sizes that converts between US and other international sizes

Device Requirements:
* iPhone, iPod touch, and iPad
* Requires iOS 3.1.3 or later (iOS 4.0 Tested)
* 40.9 MB

Pricing and Availability:
Bella Girl - Beauty Assistant for iPhone is only $3.99 (USD) and available exclusively through Apple's App Store.

For more information click here

Tuesday, November 9, 2010

Outside Ventures LLC Announces Fishing Trip Organizer for iPhone

Outside Ventures LLC is announcing today the release of their new app, Fishing Trip Organizer 1.1. Outside Ventures LLC is known as a developer of lifestyle applications for iPhone, iPod touch and iPad.
The just-released app is a complete fishing trip planner, the first of its kind on the App Store.
The Fishing Trip Organizer is a simple app designed for fishing enthusiasts of all ages and abilities. You can plan, track and remember all of your fishing trips, from tournament fishing to family trips and local outings.
Plan the perfect trip, invite your friends, log your memories, and leave with your bragging rights - firmly intact!

Features:

* Plan, Organize, Track, and View all of your Fishing Trips (past or present)
 * Share trip details with fellow travelers, including: Trip Dates and Location, Destination and Guide information, Packing Lists, To Do Lists, and Custom Notes
 * Record all Fish Caught by Type, Size, Date, and Location caught via Google Maps
 * Take Photos and View Fish Caught - directly from App
 * Daily Journal Feature
 * Instant Dial and Email for Traveler and Destination Contacts
 * Google Maps Feature
 * Instant Weather Forecast

Device Requirements:
 * iPhone (GPS-enabled) and iPod touch
 * Requires iOS 4.1 or later
 * 8.9 MB

Pricing and Availability:
Fishing Trip Organizer 1.1 is only $4.99 (USD) and available worldwide, exclusively through the Apple iTunes App Store in the Lifestyle category.



For more information click here

Monday, November 8, 2010

Server Solutions Group releases Reading Log for iPhone/iPod touch

Reading Log 1.0 for iPhone and iPod touch has just been released by Server Solutions Group.
The Reading Log app was developed by parents and students working together. It is a perfect tool for parents to track which books their children are reading, how long they have been reading, and even how many pages they have read.
Elementary and middle schools commonly expect kids to read for 20 minutes per day, so the app will assist you in tracking your child's reading progress.

We've included the following features:
 * Track multiple readers
 * Track multiple books
 * Assign readers to the books most appropriate for them
 * Optionally record how many pages were read in the allotted time
 * Produce simple summary and detailed reading reports
 * Email created reports

Device Requirements:
 * iPhone and iPod touch
 * Requires iOS 4.0 or later
 * 0.5 MB

Pricing and Availability:
Reading Log 1.0 is only $0.99 USD and available worldwide exclusively through the App Store in the Utilities category.

For more information click here

Friday, November 5, 2010

Jobjuice Releases New Finance and Investment Banking App

Announcing the release of the Finance and Investment Banking App by Jobjuice for Apple consumers. Jobjuice's new app is designed to be the best interview preparation and reference tool, including all the information necessary for finance and investment banking interviews. It is a complete MBA-level finance review and investment banking guide for iPhone, iPad and iPod touch.
The app offers several features, including over 60 cards and a complete section on interview strategies. The interview strategies will help users prepare for the most demanding jobs at top firms and financial institutions.
App content is intuitively laid out and cross-referenced between the accounting, valuation, capital markets, LBOs and M&A sections, allowing comprehensive review of key financial concepts and dynamic preparation.

The Jobjuice Finance and Investment Banking App allows users to:
 * Review the finance and investment banking interview and gain interview tips
* Read and go through cards filled with essential accounting, finance and valuation concepts and frameworks
* Create their own groups of cards by topic or interview question
* Use the practice section to test their recall by flipping through random cards
 * Use the Q&A section to practice typical finance case questions
* Use easy links within cards to access related information/cards
 * Find topics easily using the deck's search engine
* Navigate through all topics easily and intuitively

Supported Languages:
 * US English, Japanese, Korean and Spanish

Device Requirements:
 * iPhone, iPod touch, and iPad
 * Requires iOS 3.0 or later
 * 3.5 MB

Pricing and Availability:
The English version of Jobjuice Finance and Investment Banking 1.0 is $9.99 USD and available exclusively through the App Store in the Business category.


For more information click here

Thursday, November 4, 2010

NSExpression

The relatively new NSExpression class is incredibly powerful, yet not really used very often. Part of that is that it's not very well documented. Although the API documentation for NSExpression is fairly well detailed, the listed "companion guide" (Introduction to Predicates Programming) has very little information about how to actually use NSExpression.

NSExpression deserves to be better documented, because it brings to predicate programming (including Core Data) a lot of features from the relational database world that people often complain are missing, like unioning, intersecting, and subtracting result sets and performing aggregate operations without loading managed objects or faults into memory.

The aggregates functionality is especially important on iOS given the limited memory on most iOS devices. If you've got a large dataset, and you want to get a count of objects, or calculate an average or sum for one of the attributes, you really don't want to have to pull the entire dataset into memory. Even if they're just faults, they're going to eat up memory you don't need to use because the underlying SQLite persistent store can figure that stuff out without the object overhead.

I don't have time to do a full NSExpression tutorial, but I thought it at least worth posting a category on NSManagedObject that lets you take advantage of some of its more useful features.

With this category, to get a sum of the attribute bar on entity Foo, you would do this:
NSNumber *fooSum = [Foo aggregateOperation:@"sum:" onAttribute:@"bar" withPredicate:nil inManagedObjectContext:context];



This will calculate it for you using the database features, NOT by loading all the managed objects into memory. Much more memory and processor efficient than doing it manually.

Cheers. Category follows:

Header File:
@interface NSManagedObject(MCAggregate)
+(NSNumber *)aggregateOperation:(NSString *)function onAttribute:(NSString *)attributeName withPredicate:(NSPredicate *)predicate inManagedObjectContext:(NSManagedObjectContext *)context;
@end



Implementation File:
@implementation NSManagedObject(MCAggregate)

+(NSNumber *)aggregateOperation:(NSString *)function onAttribute:(NSString *)attributeName withPredicate:(NSPredicate *)predicate inManagedObjectContext:(NSManagedObjectContext *)context
{
    // Build an expression like sum:(attributeName) for the store to evaluate
    NSExpression *ex = [NSExpression expressionForFunction:function
                                                 arguments:[NSArray arrayWithObject:[NSExpression expressionForKeyPath:attributeName]]];

    // Describe the expression result so it can be fetched as a named property
    NSExpressionDescription *ed = [[NSExpressionDescription alloc] init];
    [ed setName:@"result"];
    [ed setExpression:ex];
    [ed setExpressionResultType:NSInteger64AttributeType];

    NSArray *properties = [NSArray arrayWithObject:ed];
    [ed release];

    // Fetch only the expression result as a dictionary, not managed objects
    NSFetchRequest *request = [[NSFetchRequest alloc] init];
    [request setPropertiesToFetch:properties];
    [request setResultType:NSDictionaryResultType];

    if (predicate != nil)
        [request setPredicate:predicate];

    // Assumes the entity name matches the class name
    NSEntityDescription *entity = [NSEntityDescription entityForName:NSStringFromClass([self class])
                                               inManagedObjectContext:context];
    [request setEntity:entity];

    NSArray *results = [context executeFetchRequest:request error:nil];
    [request release];

    NSDictionary *resultsDictionary = [results objectAtIndex:0];
    NSNumber *resultValue = [resultsDictionary objectForKey:@"result"];
    return resultValue;
}

@end
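
As a second, hypothetical example (the Employee entity, its salary attribute, and its department relationship are made up purely for illustration), here's how you might restrict an aggregate with a predicate:

// Hypothetical: average the "salary" attribute of an Employee entity,
// restricted to one department. Entity and attribute names are illustrative only.
NSPredicate *engineeringOnly = [NSPredicate predicateWithFormat:@"department.name == %@", @"Engineering"];
NSNumber *averageSalary = [Employee aggregateOperation:@"average:"
                                           onAttribute:@"salary"
                                         withPredicate:engineeringOnly
                                inManagedObjectContext:context];

One caveat: the category hardcodes NSInteger64AttributeType as the expression result type, so for an average over a floating point attribute you'd probably want to swap in NSDoubleAttributeType.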

CaffeineLabs Raises the Bar(code) in the Mobile Price Comparison Space

Barcode Scanner Plus 1.0 has just been released by CaffeineLabs for iPhone and iPod touch. Barcode Scanner Plus gives you enough information to make well-informed purchase decisions during the holiday season.
CaffeineLabs' new app lets you scan barcodes with the iPhone's built-in camera and compare tens of millions of product prices online.
Barcode Scanner will accurately scan any commercial barcode and search for the lowest prices among millions of products in just a few seconds, and therein lies the success of this app.

Feature Highlights:
 * History of all scanned products
 * Auto Scan - Just point the camera at the barcode, no buttons to push
 * Email all results and links to review later on a Laptop/Desktop computer
 * Manually search by product name if there are no barcodes
 * Scans all commercial barcodes, including: EAN13, UPCA, EAN8, UPCE, QR

Device Requirements:
 * Internet Access (WiFi or Cellular)
 * Supports iPhone (3GS, 4)
 * Camera for auto scanning, or manually enter any barcode or product name without a camera

Pricing and Availability:
Barcode Scanner Plus 1.0 is offered at an introductory rate of $0.99 USD and available worldwide exclusively through the App Store in the Utilities category.


For more information click here

Wednesday, November 3, 2010

Sudoku! 1.9 for iPhone has been released

Mayan Software is announcing today the release of "Sudoku" for iPhone, iPod touch, and iPad. The new Sudoku app was created and designed to be an easy pick-up-and-play experience for everyone.
The game features an elegant layout and attractive interface design. It offers 2 game modes: play against the clock, or just lay back and relax while solving your puzzles.
Each puzzle records its own best time, adding replay value so you can always compete against yourself.
You can choose a difficulty and the game will start in seconds. Sudoku also features a global leader board where you can compete against other players around the world.
Every function can quickly be located and used from the main screen. Sudoku will be easy to learn for beginners as well.

Features include:
 * Unlimited undo and redo
 * Support the gorgeous Retina Display
 * 2 Game modes including "Relax" and "Challenge"
 * 20,000 sudoku puzzles ranging in 5 difficulties
 * 5 levels of difficulty: easy, medium, hard, extreme and devilish
 * Save and Resume
 * Global and local leader boards
 * Relaxing music and sound effects
 * Automatic notes
 * Intelligent hints for each puzzle
 * Highlight same and conflict numbers
 * Other game options to help any sudoku players such as Show Repeat Number, Smart Buttons or Smart Notes

Device Requirements:
 * iPhone, iPod touch, and iPad
 * Requires iOS 3.0 or later (iOS 4.0 Tested)
 * 9.4 MB

Pricing and Availability:
Sudoku! 1.9 is only $0.99 (USD) and available worldwide exclusively through the App Store in the Games category.




For more information click here

Tuesday, November 2, 2010

OpenGL ES 2.0 for iOS, Chapter 4 - Introducing the Programmable Pipeline

The code for this chapter can be found here.
I've mentioned OpenGL ES 2.0's programmable pipeline, but it may not be clear to you exactly what that term means. If that's the case, let's clear it up now. The term “pipeline” refers to the entire sequence of events, starting from when you tell OpenGL ES to draw something (usually called rendering), through the point where the objects submitted have been fully drawn. Typically, an OpenGL ES program draws repeatedly as the program runs, with each completed image being referred to as a frame.

Versions of OpenGL ES prior to 2.0 (including 1.1, which is supported by all iOS devices) used what's called a fixed rendering pipeline, which means that the final image was generated by OpenGL ES without any chance for you to intervene. A better term for it might have been “closed pipeline”, because you shove stuff in one end, it comes out the other end, and you have no ability to influence it once it starts going down the pipeline.

In the fixed pipeline, the entire image is rendered based on the values you submit to OpenGL ES in your application's previous API calls. Every time OpenGL ES 1.x renders something, it does so using the same set of algorithms and calculations. If you want a light, for example, you call a handful of OpenGL ES functions in your application code to define the kind of light you want, the position of the light, the strength of the light, and perhaps a few other attributes. OpenGL ES 1.1 then takes the information you've provided and does all the calculations needed to add the light to your scene. It figures out how to shade your objects so that they look like the light is hitting them and draws them accordingly. The fixed pipeline insulates you from a lot of things. It basically says, “Oh, honey… give me the information about your scene, and don't worry your pretty little head about all the math.”

The good thing about fixed pipeline programming is that it's conceptually straightforward and easy. Oh, I know… it doesn't feel easy when you're learning it, but compared to the programmable pipeline, the basic idea is much easier to grasp. Want to set up a view that simulates perspective? OpenGL ES will basically do it for you if you give it a handful of inputs using a couple of API calls. Want to move, rotate, or scale an object? There are functions to do that for you, too. Want to add a light or six to your scene? Just make a few calls per light before drawing, and you're good to go.

The bad thing about the fixed pipeline is that it limits what you can do. A lot of the fancy lighting and texturing effects that you see in modern 3D games, for example, can't be created easily (or at all) using the fixed pipeline. With the fixed pipeline, you're only able to do what the authors of the graphics library anticipated you might need to do, in the way they anticipated you would need to do it. Want a lens flare or depth of field? Well, you can probably figure out a way to do these kinds of things using the fixed pipeline, but it won't be easy or straightforward. People have come up with some really ingenious workarounds to outwit the limitations of the fixed pipeline, but even if you do manage to find a way to work around the limitations of the fixed pipeline to achieve some effect, your code's likely to be a bit of a hack---and more importantly, some of the code you write to implement that functionality is probably going to be running in the wrong place for best performance. Let's talk about why that is, because it's a critical piece of information once we start working with the programmable pipeline.

Hardware Architecture

OpenGL ES insulates you from having to code to any specific hardware, but it's important to understand, at least at a very high level, how iOS devices calculate and display graphic content. Every iOS device ever made has two processors inside of it. They all have a general purpose processor called the CPU, as well as a second processor called a GPU, which stands for graphics processing unit.¹ The CPU can do most anything you need it to do, and it's where your application's code primarily runs. The CPU is very fast at doing integer operations, but not anywhere near as fast when it comes to doing floating point operations². The GPU, on the other hand, is much more specialized. It's good at doing large numbers of small floating point calculations very quickly. It was designed to work as a helper to the CPU to handle those tasks that the CPU isn't particularly good at, rather than serving as a standalone processor. The CPU, in essence, hands off certain tasks that the GPU is better at performing. With the two processors working in parallel, the device is capable of doing a lot more work at one time. But this “helping” doesn't happen automatically in your programs.

When you write C, Objective-C, or C++ code in Xcode, the compiled binary code executes on the CPU. There are some libraries, such as Core Animation, that implicitly hand off tasks to the GPU on your behalf, but generally speaking, you have to use special libraries to get access to the GPU. Fortunately, OpenGL ES is just such a library. Both with the fixed pipeline and the programmable pipeline, most of the OpenGL ES rendering happens on the GPU. That makes sense, if you think about it: calculations for graphics are what the GPU was designed to do (hence the “G” in GPU). Much of OpenGL ES 2.0's pipeline, like all of the fixed pipeline, is outside your control. There are, however, two specific places where you can (and, in fact, must) write code. The code you write runs on the GPU and can't be written in Objective-C, C, or C++. It has to be written in a special language specifically designed for the purpose. Programs that you write for the programmable pipeline run on the GPU and are called shaders. The language you write shaders in is called GLSL, which stands for GL Shading Language.

The term shader is another example of nonintuitive naming in OpenGL. Shaders are nothing more than small pieces of executable code that run on the GPU instead of the CPU. Among the tasks they perform is the calculation of the shading (or color) of each pixel that represents a virtual object, but they can do far more than that. Shaders are fully fledged programs written in a Turing-complete programming language.

OpenGL ES Shaders

There are two types of shaders that you can write for OpenGL ES: vertex shaders and fragment shaders. These two shaders make up the “programmable” portion of the OpenGL ES 2.0 programmable pipeline. The GLSL language that you program these shaders with uses a C-like syntax. We'll look at a few simple examples of GLSL in this chapter, and we'll work with it extensively throughout the rest of the book.

An important thing to realize about shaders is that they are not compiled when you build your application. The source code for your shader gets stored in your application bundle as a text file, or in your code as a string literal. At runtime, before you use the shaders, your application has to load and compile them. The reason for this deferred compilation is to preserve device independence. If shaders were compiled when you built your application and then Apple were to change to a different GPU manufacturer for a future iPhone release, the compiled shaders very well might not work on the new GPU. Deferring the compile to runtime avoids this problem, and any GPU---even those that don't exist when you create your application---can be supported fully without a need to rebuild your application.

Vertex Shaders

The shader that runs first in the OpenGL ES pipeline is called the vertex shader because it runs once for every vertex that you submit to OpenGL ES. If you submit a 3D scene with a thousand vertices, the vertex shader will get called a thousand times when you submit that scene to OpenGL ES to be drawn, once per vertex. The vertex shader is where you do what OpenGL ES imaginatively calls vertex processing. It's where you handle moving, scaling, or rotating objects, simulating the perspective of human vision, and doing any other calculation that affects vertices or depends on some piece of data that you have on a per-vertex basis.

Shaders do not have return values, but both vertex and fragment shaders have required output variables that you must write a value to before the end of the shader's main() function. These output variables essentially function as required return values. For the vertex shader, the required output is the final position for the current vertex. Remember, the shader gets called once for each vertex, so the output of the shader is the final position of the vertex the shader is currently running for. In some cases, it may just be the vertex's original value, but more often than not, each vertex will be altered in some way. Doing calculations on vertices to scale, rotate, or move an object is something the GPU is much better at than the CPU, so typically, we don't try to implement those kinds of calculations in our application code, but instead do them here in the vertex shader. Once your shader has calculated the final position for a given vertex, it needs to set a special output variable called gl_Position. If your vertex shader doesn't write to gl_Position, it results in an error.

There's a slight catch, however. The gl_Position variable is a vec4 variable, which stands for vector 4. The vec4 is a datatype that contains four floating point values. You probably remember that in a Cartesian coordinate system, we use three values (X, Y, Z), not four, so it would seem like the required output should be a vec3, which contains three floating point values, just like the Vertex3D struct we wrote last chapter. The first three values in gl_Position represent the Cartesian X, Y, and Z values for the current vertex. The fourth value is typically called W. Don't worry too much about why there's an extra component. It will become important a few chapters from now when we start talking about something called matrix transformations, but for now, just think of W as a work value (that's not actually what it stands for, though) that we'll need in order to do certain calculations to manipulate our vertices. Unless you know W should be set to a different value, set it to 1.0.

Here is an extremely simple example of a vertex shader:

void main()
{
gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
}

All this shader does is move each vertex to the origin. The function vec4() is built into GLSL, and all it does is create a vector datatype with four members. We're using it to create a vertex at the origin (0,0,0) with a W value set to 1.0. By assigning a value to gl_Position, we are indicating the final position of the current vertex. This is not, perhaps, a very practical vertex shader example — any model you submit to this shader will get turned into a dot at the origin — but it is a simple one that illustrates how you set the final value of the vertex, which is the one task every vertex shader must do every time it runs.

We'll be doing a lot of work with vertex shaders throughout the book; don't worry if you don't fully understand them yet. It's a complex topic, but they'll start to make sense once you've used them. For now, the important points to remember about vertex shaders are:
  • Vertex shaders run once for every vertex that OpenGL ES draws.
  • Vertex shaders must set gl_Position to indicate the location of the current vertex using Cartesian coordinates (x,y,z), along with an additional value called W. For the time being, we'll always set W to 1.0.

Fragment Shaders

The second programmable part of the OpenGL ES 2.0 programmable pipeline is called a fragment shader, and it's called that because, well… the fragment shader runs once for every fragment in the drawing operation. That's probably not very helpful, huh? So… what's a “fragment”?

Think of a fragment as a possible drawn pixel. A fragment includes all of the various things in the virtual world that could potentially affect one pixel's final color. Imagine that an OpenGL ES view on your iPhone or iPad's screen is a window into a virtual world. Now pick a single pixel in your OpenGL view. If you were to take a slice of your virtual world starting with that pixel, and moving into the virtual world as far as the eye can see, everything that lies behind that one pixel constitutes the fragment for that pixel. Sometimes you'll see fragment shaders called pixel shaders. This is actually a bit of a misnomer, but it's helpful for visualizing a fragment.

Like vertex shaders, fragment shaders have a required output, which is the final color of the pixel that corresponds to the current fragment. You indicate the pixel's color by setting a special GLSL variable called gl_FragColor. Here is the simplest possible fragment shader; it just sets the fragment's color to an opaque blue:

void main()
{
gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);
}

Colors, as we saw in the last chapter, are represented by four components in OpenGL ES (red, green, blue, and alpha), and OpenGL ES expects those components in that specific order. GLSL doesn't have a datatype specifically designed for holding colors. Instead, it uses the same datatype it uses for vectors and vertices, so by building a vec4 (a vector datatype with four floating point members) with these four values, we are creating a color in which red and green are set to zero, and blue and alpha are set to one, which is an opaque blue. By assigning that value to gl_FragColor, we're telling OpenGL ES how to draw the pixel that corresponds to this fragment.

You might expect this fragment shader to create a view that's filled entirely with blue, but that's not necessarily what it does. Understanding this will help you understand the difference between a fragment and a pixel. Each frame starts empty, with the background set to a specific color — often black, but it can be set to any color. The vertex data (and other data) describing the scene to be drawn are submitted to OpenGL ES, and a function is called to kick off the rendering pipeline. If there's nothing in the virtual world that can affect a particular screen pixel, the fragment shader doesn't run for that pixel; it just gets left at the background color. This is the reason the term “pixel shader” is not technically correct: a pixel with no corresponding fragment doesn't get processed by the shader. A fragment has one and only one pixel, but a pixel doesn't necessarily have to have a fragment.
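
For what it's worth, that background color is just the clear color you set from application code before kicking off each frame; a minimal sketch (the color values here are arbitrary):

// Set the background ("clear") color and clear the frame before drawing.
// Any pixel that no fragment shader runs for simply keeps this color.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);   // opaque black
glClear(GL_COLOR_BUFFER_BIT);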


[Image: pixel_no_fragment.png]


This scene contains a single texture-mapped object. All the area that is drawn in black are pixels with no fragment because no object in the scene can affect their final color.


What the fragment shader above does is set any pixel that has part of one or more virtual objects “behind” it (so to speak) to blue. That's probably a little confusing, but it will become clear when we write our first OpenGL ES application in the next chapter. For now, the points to remember about fragment shaders are the following:
  • Fragment shaders run once for every fragment, which means once for every pixel in which something can potentially be drawn.
  • Fragment shaders must set gl_FragColor to indicate the color that the fragment's pixel should be drawn.

Sending Data to the Shaders

Shaders do not have access to your application's main memory. Any data that a shader needs to do its job has to be specifically sent over to the GPU from your application code. Sending this data incurs overhead and can be a bottleneck in the rendering pipeline. In order to keep rendering performance up, it's important to only send the data that your shaders need. There are two types of data you can send from your application code to your shaders: attributes and uniforms.
Attributes
An attribute is data for which you have one distinct value for each vertex being submitted. If, for example, you are submitting a scene with a thousand vertices, any attributes you pass must contain a thousand values. If you have an attribute of colors, you must pass in a thousand colors. If you have an attribute of vectors, you must pass in a thousand vectors. You will virtually always have at least one attribute, containing the Cartesian coordinates of each vertex to be drawn or, at least, the starting position of each vertex before it gets transformed by the vertex processor. Without this data, there's really no way to do anything in your vertex shader. You can only submit floating point data in an attribute, not integer data, though you can provide multiple floating point values to each vertex in a single attribute. A color, for example, contains four floating point numbers, so to provide data for a color attribute, you need to provide an array containing 4 floats multiplied by the number of vertices being submitted. That same attribute comes into the shader as a single vec4.

Each time your vertex shader runs, the pipeline will provide it with just the value that corresponds to the vertex that the shader is executing for. So, in your application code, attributes are represented by an array with one or more values for each vertex, but in your vertex shader, you deal with only a single chunk of data from that submitted array, which contains the values that correspond to the current vertex. We'll see how to send attributes from your application to the shader a little later in the chapter, but here's how you work with an attribute inside your vertex shader:

attribute vec4 position;

void main()
{
gl_Position = position;
}

It's pretty straightforward; you declare the attribute at the top of the shader, and that's pretty much all you have to do on the shader side. The OpenGL ES pipeline takes care of handing your shader the right data element each time. That means you can treat the attribute (position, in this case) as an input variable, almost like an argument to a function. In this example, we're taking the value from the position attribute for this vertex and assigning it as-is to the special gl_Position output variable. In this case, our final position for each vertex is the starting position that was supplied to us by our application code. We'll see how to send attributes from our application code a little later in this chapter - there's some other information we need to go over before it will make sense.
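
Just as a rough preview of the application side (we'll go through it properly later), feeding a position attribute to that shader looks something like the following; the program handle and triangle data here are placeholders, not code from this chapter's project:

// Preview sketch: one triangle, four floats (x, y, z, w) per vertex.
// "program" is assumed to be a compiled and linked program handle.
GLfloat vertices[] = {
     0.0f,  0.5f, 0.0f, 1.0f,
    -0.5f, -0.5f, 0.0f, 1.0f,
     0.5f, -0.5f, 0.0f, 1.0f,
};
GLuint positionSlot = glGetAttribLocation(program, "position");
glEnableVertexAttribArray(positionSlot);
glVertexAttribPointer(positionSlot, 4, GL_FLOAT, GL_FALSE, 0, vertices);
glDrawArrays(GL_TRIANGLES, 0, 3);   // the vertex shader runs once per vertex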
Uniforms
Uniforms are the second kind of data that you can pass from your application code to your shaders. Uniforms are available to both vertex and fragment shaders — unlike attributes, which are only available in the vertex shader. The value of a uniform cannot be changed by the shaders, and will have the same value every time a shader runs for a given trip through the pipeline. Uniforms can be pretty much any kind of data you want to pass along for use in your shader.

We'll look at how to pass uniforms from your application code a little later, but in your shader, working with a uniform is just like working with an attribute. You declare it at the top and then treat it as an input value in your code, like so:

attribute vec4 position;
uniform float translate;

void main()
{
gl_Position = position;
gl_Position.y += translate;
}

In this example, we're passing a floating point value called translate, then using it to modify the gl_Position output variable, moving the vertex along the Y axis based on the value of the translate uniform. NB: This is not how you would normally move an object in your shader. This is just a simple example to illustrate how uniforms work.
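
On the application side (again, just a preview of what's coming later), a uniform like translate is looked up in the program and then set before drawing, along these lines; "program" is a placeholder for a linked program handle:

// Preview sketch: locate the uniform, then set it before drawing.
GLint translateSlot = glGetUniformLocation(program, "translate");
glUniform1f(translateSlot, 0.5f);   // every vertex sees the same 0.5 this frame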

Varyings

Since attributes are only available in the vertex shader and the value of uniforms can't be changed, how can the fragment shader know what values to use when drawing a given pixel? Let's say, for example, that we have an attribute containing per-vertex colors. In order to be able to determine the final pixel color in our fragment shader, we would need access to that particular piece of per-vertex information, wouldn't we?

Why, yes, we would. And that's where something called a varying comes into play. Varyings are special variables that can be passed from the vertex shader to the fragment shader, but it's cooler than it sounds. There is no set relationship between vertices and fragments. So, how can a value from the vertex shader be used later in the fragment shader? How does it figure out which vertex's value to use? What happens with varyings is that the value set in the vertex shader is automatically interpolated for use in the fragment shader based on the fragment's pixel's relative distance from the vertices that affect it. Let's look at a simple example. Say we're drawing a line:

[Image: fragment.png]
A varying set in the vertex shader for V1 and V2 would have a value halfway between those two values when the fragment shader runs for fragment F. If the varying color was set to red in the vertex shader for V1 and to blue in the vertex shader for V2, when the fragment shader for the fragment corresponding to the pixel at F runs and reads that varying, it will contain neither blue nor red. Instead, it will have a purple color, halfway between red and blue because that fragment is roughly halfway between those two vertices. The pipeline automatically figures out which vertices affect the drawing of a given fragment and automatically interpolates the values set for the varyings in the vertex shaders based on the relative distances of the fragment from those vertices.

[Image: triangle.png]


Varying interpolation is not limited to interpolating values from two vertices, either. The pipeline will figure out all the vertices that influence the fragment and calculate the correct value. Here is a simple example with three vertices each with a different color.




Varyings are easy to use: you just declare them in both shaders. Then any value you set in the vertex shader will be available, in interpolated form, in the fragment shader. Here's a very simple example of a vertex shader that assigns a value from a per-vertex color to a varying:

attribute vec4 position;
attribute vec4 color;

varying vec4 fragmentColor;

void main()
{
gl_Position = position;
fragmentColor = color;
}

In this example, color is the color for this vertex that was passed in from our application code. We've declared a varying called fragmentColor to let us pass a color value to the fragment shader. We've declared it as a vec4 because colors are comprised of four component values. In addition to setting gl_Position based on the vertex's position value that was passed into the shader using the position attribute, we also assign the value from the color per-vertex attribute to the varying called fragmentColor. This value will then be available in the fragment shader in interpolated form.

In the shader above, if we drew a line and had an attribute that defined the color at the first point as red and the color at the second point as blue, this is what would get drawn:

[Figure: a line that shades smoothly from red at one end to blue at the other]


Here's what a simple fragment shader using that same varying would look like:

varying lowp vec4 fragmentColor;

void main()
{
gl_FragColor = fragmentColor;
}

The declaration of the varying in the fragment shader has the same name (fragmentColor) as it did in the vertex shader. This is important; if the names don't match, OpenGL ES won't realize it's the same variable. It also has to be the same datatype, in this case vec4, just as it was in the vertex shader. Notice, however, that there's an additional keyword, lowp. This is a GLSL keyword used to specify precision or, in other words, how much storage is used to represent a number. The more storage used to represent a number, the fewer problems you'll have with the rounding that necessarily happens in floating point calculations. Depending on the amount of precision you need, you can specify lowp, mediump, or highp to indicate how precisely the floating point value should be stored while it's being used in the shaders. The actual amount of storage used for a variable is decided by OpenGL ES, but the precision keyword lets you give it a hint about how much precision you think the variable needs in this situation.

GLSL allows the use of precision modifiers any time a variable is declared, but this is the one place where a modifier is required. If you don't include one when declaring varyings in your fragment shader, your shader will fail to compile. Everywhere else, the precision modifier is optional, and the GLSL specification lays out a set of rules that determine the precision when no explicit modifier is provided.

The lowp keyword is going to give the best performance but the least accuracy during interpolation. It is often the best choice for things like colors, where small rounding errors won't really matter. When in doubt, start with lowp. You can always increase the precision to mediump or highp if the lack of precision causes problems in your application.
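As a quick, hypothetical illustration (these variable names aren't from any of the book's shaders), here's how the three precision qualifiers might appear at the top of a fragment shader:

varying lowp vec4 fragmentColor; // colors: tiny rounding errors are invisible
varying mediump vec2 textureCoordinate; // texture coordinates usually do fine at mediump
varying highp float distanceFromCamera; // values with a large range may need highp

void main()
{
gl_FragColor = fragmentColor;
}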

All we do with the value from fragmentColor, which is the interpolated version of the color values set in the vertex shader, is assign it to gl_FragColor so that the pixel gets drawn in the interpolated color. This creates a gradient between the vertices if those vertices aren't the same color.

Before we look at how to pass attributes and uniforms to the shader from our application code, we first need to talk about how shaders get loaded and compiled, because the way we pass data in relies on that mechanism. Let's look at that now; then we'll return to attributes and uniforms from the other side of the pipeline.

OpenGL ES Programs

Shaders always work in pairs in OpenGL ES. At any given moment, there can only be one active vertex shader and one active fragment shader, and when you tell OpenGL ES to draw something, an active vertex and fragment shader must already be in place. Even though only one shader pair can be active at any given moment, you can have different shader pairs for drawing different objects. This allows you to, for example, apply different lighting or different effects to objects in the same scene.

OpenGL ES has a concept called a program that combines a vertex shader and a fragment shader, along with their attributes, into a single OpenGL ES object. You can create as many of these programs as you want, but only one of them can be active at any given time. If you make a program active, the program that was previously active becomes inactive. Typically, programs are created and the shaders loaded and compiled when your application starts, or at some other point before you actually begin drawing, such as when loading a level in a game. You don't want to wait until you need a shader to load and compile it, because doing so is a costly operation that would cause a noticeable hiccup in the drawing process.
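To make that concrete, here's a minimal sketch (the program handles are hypothetical, and the loading details are covered below): once two programs have been created and linked, switching between them while drawing is just a matter of calling glUseProgram():

// litProgram and plainProgram are assumed to be GLuint handles for two
// programs that were already compiled and linked at startup.
glUseProgram(litProgram); // litProgram is now the active program
// ... submit attributes/uniforms and draw the objects that need lighting ...

glUseProgram(plainProgram); // litProgram becomes inactive; plainProgram takes over
// ... draw the flat-colored objects ...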

Loading programs and getting them ready to use is a bit of an involved process. Here is the basic flow:

  1. Create and compile the shaders. The following steps must be performed twice---once for the vertex shader, and again for the fragment shader:

    1. Load the shader source code into memory.
    2. Call glCreateShader() to create an empty shader object, saving the returned value to refer to this shader in future calls.
    3. Use glShaderSource() to pass the newly created shader object its source code.
    4. Call glCompileShader() to compile the shader.
    5. Use glGetShaderiv() to check the compile status and make sure that the shader compiled correctly.
  2. Call glCreateProgram() to create an empty program and save the returned value so that you can use the program in future calls.
  3. Attach the two shaders to the program using glAttachShader().
  4. Delete the shaders using glDeleteShader(). The program will have made its own copy of the shaders, so deleting them doesn't prevent the program from working.
  5. Bind each of the vertex shader's attributes to the program using glBindAttribLocation().
  6. Link the program using glLinkProgram().
  7. When you want to use this program for the first time, or if you want to change the active program to this program, call glUseProgram().
  8. When you're done with a program, get rid of it using glDeleteProgram().

The following is an example of fairly typical OpenGL ES program-loading code for iOS 4. Don't worry too much about what it's doing; just scan it over and shake your head a little:

GLuint program;
GLuint vertShader;
GLuint fragShader;

GLint status;
const GLchar *source;

program = glCreateProgram();

NSString *vertShaderPathname = [[NSBundle mainBundle] pathForResource:@"shader"
ofType:@"vsh"];
source = (GLchar *)[[NSString stringWithContentsOfFile:vertShaderPathname
encoding:NSUTF8StringEncoding
error:nil] UTF8String];
if (!source)
{
// Deal with error
}

vertShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertShader, 1, &source, NULL);
glCompileShader(vertShader);

glGetShaderiv(vertShader, GL_COMPILE_STATUS, &status);
if (status == 0)
{
glDeleteShader(vertShader);
// Deal with error
}

NSString *fragShaderPathname = [[NSBundle mainBundle] pathForResource:@"shader"
ofType:@"fsh"];
source = (GLchar *)[[NSString stringWithContentsOfFile:fragShaderPathname
encoding:NSUTF8StringEncoding
error:nil] UTF8String];
if (!source)
{
// Error checking
}

fragShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragShader, 1, &source, NULL);
glCompileShader(fragShader);

glGetShaderiv(fragShader, GL_COMPILE_STATUS, &status);
if (status == 0)
{
glDeleteShader(fragShader);
// Error checking!
}

glAttachShader(program, vertShader);
glAttachShader(program, fragShader);
glBindAttribLocation(program, 1, "position");

glLinkProgram(program);

glGetProgramiv(program, GL_LINK_STATUS, &status);
if (status == 0)
return NO;

if (vertShader)
glDeleteShader(vertShader);
if (fragShader)
glDeleteShader(fragShader);

glUseProgram(program);

That's pretty ugly, isn't it? It's not much fun to write, either. Fortunately, we can simplify the process quite a bit by creating our own Objective-C wrapper class to represent OpenGL ES programs. Instead of stepping through the code above and examining it, let's package that same functionality up into a more reusable form and discuss that. Doing so kills two birds with one stone: it lets us step through and understand the process involved in creating programs in OpenGL ES, and it saves us from having to write nasty code like that every time we need to create a program.
Writing the GLProgram class
Open up Xcode or a text editor and create two empty text files. Call one of them GLProgram.h and the other GLProgram.m. We'll be using this class in every one of the projects we create in this book, so make sure to save the two files somewhere you can find them easily. Or, if you prefer, copy my version from the code folder that came with the book.

Put the following code in GLProgram.h:

#import <Foundation/Foundation.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

@interface GLProgram : NSObject
{

}

- (id)initWithVertexShaderFilename:(NSString *)vShaderFilename
fragmentShaderFilename:(NSString *)fShaderFilename;

- (void)addAttribute:(NSString *)attributeName;
- (GLuint)attributeIndex:(NSString *)attributeName;
- (GLuint)uniformIndex:(NSString *)uniformName;
- (BOOL)link;
- (void)use;
- (NSString *)vertexShaderLog;
- (NSString *)fragmentShaderLog;
- (NSString *)programLog;
@end


Take a look at the header file we just created; notice that we haven't created any properties, and we don't have any instance variables in our header. We haven't exposed anything here because there shouldn't be any need for other classes to have direct access to any of our instance variables. Everything the program needs to do should be handled using the various methods on our GLProgram object.


New Objective-C Features

Because this book focuses on iOS 4, I'm using a lot of newer functionality in Objective-C 2.0 & 2.1. One instance is in GLProgram: I've used Objective-C's new ability to declare instance variables in a class extension. This lets me have private instance variables that aren't advertised to other classes because they aren't contained in my class's header file. This feature isn't available in SDKs prior to iOS 4.0, however, so if you try to use some of the code samples from this book with older versions of the SDK, you may get compilation errors. If you run into this problem, copy the instance variable declarations from the class extension into the class's header file.

The first method in our class is our initialization method. It takes the name of the file containing the vertex shader source code and the name of the file that contains the fragment shader source code as arguments. This method loads the source and compiles both shaders as part of initializing the object.

After that, we have the method that will be used to add attributes to our program, followed by two methods that can be used to retrieve the index values for a given attribute or uniform. These index values are used to submit data to the shaders and can be retrieved any time after the program is linked. All of the program's attributes must be added to the program before linking.

The next method we declare, link, is similar to the linking that happens after you compile your application's source code. Xcode handles compiling and linking as a single step when you build your application, but with shaders, it's a necessary, separate step that links all the various components together and gets them ready to use. We couldn't link right after we compiled the shaders because OpenGL needs to know about the program's attributes before it can link properly.

The use method is called when we want to draw using this program's shaders. You can call this method repeatedly, allowing you to switch between shaders at runtime.

The final three methods are primarily for debugging purposes. Since shaders are compiled at runtime, not build time, a syntax error or other problem in a shader won't cause our application build to fail in Xcode, but it will cause the shader compile and/or the program link to fail at runtime. If the link method returns NO, these three methods are how we can find out what went wrong so we can fix it.

Make sure you save GLProgram.h, then switch over to the other text file that you named GLProgram.m and put the following code in it (or you can just copy mine out of the book's source code folder):

#import "GLProgram.h"
#pragma mark Function Pointer Definitions
typedef void (*GLInfoFunction)(GLuint program,
GLenum pname,
GLint* params);

typedef void (*GLLogFunction) (GLuint program,
GLsizei bufsize,
GLsizei* length,
GLchar* infolog);

#pragma mark -
#pragma mark Private Extension Method Declaration
@interface GLProgram()
{
NSMutableArray *attributes;
NSMutableArray *uniforms;
GLuint program,
vertShader,
fragShader;
}

- (BOOL)compileShader:(GLuint *)shader
type:(GLenum)type
file:(NSString *)file;

- (NSString *)logForOpenGLObject:(GLuint)object
infoCallback:(GLInfoFunction)infoFunc
logFunc:(GLLogFunction)logFunc;

@end

#pragma mark -

@implementation GLProgram
- (id)initWithVertexShaderFilename:(NSString *)vShaderFilename
fragmentShaderFilename:(NSString *)fShaderFilename
{
if (self = [super init])
{
attributes = [[NSMutableArray alloc] init];
uniforms = [[NSMutableArray alloc] init];
NSString *vertShaderPathname, *fragShaderPathname;
program = glCreateProgram();

vertShaderPathname = [[NSBundle mainBundle]
pathForResource:vShaderFilename
ofType:@"vsh"];
if (![self compileShader:&vertShader
type:GL_VERTEX_SHADER
file:vertShaderPathname])
NSLog(@"Failed to compile vertex shader");

// Create and compile fragment shader
fragShaderPathname = [[NSBundle mainBundle]
pathForResource:fShaderFilename
ofType:@"fsh"];
if (![self compileShader:&fragShader
type:GL_FRAGMENT_SHADER
file:fragShaderPathname])
NSLog(@"Failed to compile fragment shader");

glAttachShader(program, vertShader);
glAttachShader(program, fragShader);
}


return self;
}

- (BOOL)compileShader:(GLuint *)shader
type:(GLenum)type
file:(NSString *)file
{
GLint status;
const GLchar *source;

source = (GLchar *)[[NSString stringWithContentsOfFile:file
encoding:NSUTF8StringEncoding
error:nil] UTF8String];
if (!source)
{
NSLog(@"Failed to load shader file");
return NO;
}


*shader = glCreateShader(type);
glShaderSource(*shader, 1, &source, NULL);
glCompileShader(*shader);

glGetShaderiv(*shader, GL_COMPILE_STATUS, &status);
return status == GL_TRUE;
}

#pragma mark -
- (void)addAttribute:(NSString *)attributeName
{
if (![attributes containsObject:attributeName])
{
[attributes addObject:attributeName];
glBindAttribLocation(program,
[attributes indexOfObject:attributeName],
[attributeName UTF8String]);
}

}

- (GLuint)attributeIndex:(NSString *)attributeName
{
return [attributes indexOfObject:attributeName];
}

- (GLuint)uniformIndex:(NSString *)uniformName
{
return glGetUniformLocation(program, [uniformName UTF8String]);
}

#pragma mark -
- (BOOL)link
{
GLint status;

glLinkProgram(program);
glValidateProgram(program);

glGetProgramiv(program, GL_LINK_STATUS, &status);
if (status == GL_FALSE)
return NO;

if (vertShader)
glDeleteShader(vertShader);
if (fragShader)
glDeleteShader(fragShader);

return YES;
}

- (void)use
{
glUseProgram(program);
}

#pragma mark -
- (NSString *)logForOpenGLObject:(GLuint)object
infoCallback:(GLInfoFunction)infoFunc
logFunc:(GLLogFunction)logFunc
{
GLint logLength = 0, charsWritten = 0;

infoFunc(object, GL_INFO_LOG_LENGTH, &logLength);
if (logLength < 1)
return nil;

char *logBytes = malloc(logLength);
logFunc(object, logLength, &charsWritten, logBytes);
NSString *log = [[[NSString alloc] initWithBytes:logBytes
length:logLength
encoding:NSUTF8StringEncoding] autorelease];
free(logBytes);
return log;
}

- (NSString *)vertexShaderLog
{
return [self logForOpenGLObject:vertShader
infoCallback:(GLInfoFunction)&glGetShaderiv
logFunc:(GLLogFunction)&glGetShaderInfoLog];
}

- (NSString *)fragmentShaderLog
{
return [self logForOpenGLObject:fragShader
infoCallback:(GLInfoFunction)&glGetShaderiv
logFunc:(GLLogFunction)&glGetShaderInfoLog];
}

- (NSString *)programLog
{
return [self logForOpenGLObject:program
infoCallback:(GLInfoFunction)&glGetProgramiv
logFunc:(GLLogFunction)&glGetProgramInfoLog];
}

#pragma mark -
- (void)dealloc
{
[attributes release];
[uniforms release];

if (vertShader)
glDeleteShader(vertShader);

if (fragShader)
glDeleteShader(fragShader);

if (program)
glDeleteProgram(program);

[super dealloc];
}

@end

Let's take this piece by piece and make sure we're clear on what it's doing. The first section might seem a little confusing. We've defined two datatypes to represent function pointers:

typedef void (*GLInfoFunction)(GLuint program,
GLenum pname,
GLint* params);

typedef void (*GLLogFunction) (GLuint program,
GLsizei bufsize,
GLsizei* length,
GLchar* infolog);

While writing the code for the three log methods, it became clear that all three were nearly identical. The two shader logs were exactly the same except for the value passed into the two OpenGL ES functions. The program log had almost identical logic, except it used two different OpenGL ES API calls to retrieve the log data. Those calls, however, take exactly the same arguments in both cases, which lets us write one generic method to handle all three types of log by accepting function pointers as parameters. That shortens the code and makes it easier to maintain, since the log logic doesn't have to be repeated, and these type definitions make the code that uses the function pointers easier to read.
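As a quick illustration (this snippet isn't part of GLProgram; the variable names are just for show), the shader and program flavors of these calls have the same parameter shapes, which is why a single pair of typedefs can describe either:

// Either the shader or the program version can be stored in the same function
// pointer types, because their parameter lists have matching shapes.
GLInfoFunction infoFunc = (GLInfoFunction)&glGetShaderiv; // or (GLInfoFunction)&glGetProgramiv
GLLogFunction logFunc = (GLLogFunction)&glGetShaderInfoLog; // or (GLLogFunction)&glGetProgramInfoLog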

Next, we use an Objective-C class extension to declare the instance variables and two private methods. The first instance variable is a mutable array that will be used to keep track of the program's attributes. There's no reason to keep track of varyings or uniforms. The varyings are strictly between the two shaders, and declaring a varying in both shaders is all that's required to create it. We also don't need to keep track of uniforms, because OpenGL ES assigns each uniform an index value when it links the program. With attributes, however, we have to come up with an index number for each one and tell OpenGL ES which index we're using for which attribute when we bind them; OpenGL ES doesn't assign attribute indices for us. Sticking the attributes into an array and using their index values from that array is the easiest way to handle the task in Objective-C, so that's what we're doing.

After the array, we have three GLuints. These are for keeping track of the numbers that OpenGL ES will assign to uniquely identify our program and its two shaders.

Then we have two private methods, which are methods that will be used within this class, but that code outside of this class should never need access to. One is a method that compiles a shader. Since the process of compiling a fragment shader and a vertex shader is exactly the same, we create one method to do them both. The second method is the generic log method mentioned earlier that is used by the three public log methods.

#pragma mark -
#pragma mark Private Extension Method Declaration
@interface GLProgram()
{
NSMutableArray *attributes;
NSMutableArray *uniforms;
GLuint program,
vertShader,
fragShader;
}

- (BOOL)compileShader:(GLuint *)shader
type:(GLenum)type
file:(NSString *)file;

- (NSString *)logForOpenGLObject:(GLuint)object
infoCallback:(GLInfoFunction)infoFunc
logFunc:(GLLogFunction)logFunc;

@end

After that, we have an init method. This method takes the names of the two shader files (without the file extension), creates a program, then loads and attempts to compile both shaders and attaches them to the program before returning the initialized object. It also creates the mutable array that will be used to hold the attribute information. If the shaders fail to compile, it still returns a valid object. If we were to release the object and return nil, we would have no way to get at the log data that tells us what went wrong. By returning a valid object when a shader compile fails, the link step will fail and return NO, which is the calling code's indication that something went wrong and the logs should be checked.

- (id)initWithVertexShaderFilename:(NSString *)vShaderFilename 
fragmentShaderFilename:(NSString *)fShaderFilename
{
if (self = [super init])
{
attributes = [[NSMutableArray alloc] init];
uniforms = [[NSMutableArray alloc] init];
NSString *vertShaderPathname, *fragShaderPathname;
program = glCreateProgram();

vertShaderPathname = [[NSBundle mainBundle]
pathForResource:vShaderFilename
ofType:@"vsh"];
if (![self compileShader:&vertShader
type:GL_VERTEX_SHADER
file:vertShaderPathname])
NSLog(@"Failed to compile vertex shader");

// Create and compile fragment shader
fragShaderPathname = [[NSBundle mainBundle]
pathForResource:fShaderFilename
ofType:@"fsh"];
if (![self compileShader:&fragShader
type:GL_FRAGMENT_SHADER
file:fragShaderPathname])
NSLog(@"Failed to compile fragment shader");

glAttachShader(program, vertShader);
glAttachShader(program, fragShader);
}


return self;
}
To actually load and compile the shaders, the init method calls the next method in the file twice, once for each shader:

- (BOOL)compileShader:(GLuint *)shader 
type:(GLenum)type
file:(NSString *)file
{
GLint status;
const GLchar *source;

source = (GLchar *)[[NSString stringWithContentsOfFile:file
encoding:NSUTF8StringEncoding
error:nil] UTF8String];
if (!source)
{
NSLog(@"Failed to load shader file");
return NO;
}


*shader = glCreateShader(type);
glShaderSource(*shader, 1, &source, NULL);
glCompileShader(*shader);

glGetShaderiv(*shader, GL_COMPILE_STATUS, &status);
return status == GL_TRUE;
}
The file containing the shader source is loaded from the application bundle. If the method is unable to load the specified file, it returns NO. If it was able to get the shader's source, then it uses OpenGL ES API functions to create a shader, give the newly created shader the loaded source code, and then compile it. After compiling, the compile status is checked and a value returned based on whether the shader was successfully compiled.

Once a GLProgram instance has been created and initialized, the next thing that we need to do is to tell it what attributes the vertex shader uses. The next method in the file is used for that purpose:

- (void)addAttribute:(NSString *)attributeName
{
if (![attributes containsObject:attributeName])
{
[attributes addObject:attributeName];
glBindAttribLocation(program,
[attributes indexOfObject:attributeName],
[attributeName UTF8String]);
}

}

This method checks to make sure the attribute hasn't already been added to the attributes array before adding it, since attributes must have unique names. It also calls glBindAttribLocation() to let OpenGL ES know about the attribute. Remember: OpenGL ES needs to know about every attribute before we link the program, and this is how we tell it. Attributes are identified by their index numbers, and we specify the index from our array when we call glBindAttribLocation(), which ensures each attribute gets a unique index value. The traditional approach is to create an enum listing the attributes to be used (see the sketch below), but our approach makes the code a little more readable and keeps the program functionality more self-contained.
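For comparison, here's a rough sketch of that traditional enum-based approach; the enum and attribute names here are hypothetical and aren't part of GLProgram:

// Application code picks the attribute indices up front
enum {
ATTRIB_POSITION,
ATTRIB_COLOR,
NUM_ATTRIBUTES
};

// Before linking, each attribute name used in the shader is bound to its chosen index
glBindAttribLocation(program, ATTRIB_POSITION, "position");
glBindAttribLocation(program, ATTRIB_COLOR, "color");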

Uniforms don't need to be kept track of or added before linking. OpenGL ES assigns each uniform an index without any input from us. When the shaders are compiled, OpenGL ES discovers the uniforms, and when we link the program, it assigns each uniform used in the shaders an index value.


Attributes and Uniform Indices



Why do attribute indices have to be specified before linking, but uniform indices are assigned by OpenGL ES without any input? I don't know. As far as I've been able to discover, the reason for the difference is not specifically documented.

I have a theory, however, that it may be due to the way attributes and uniforms are stored. All attributes take up at least the space of a vec4. That means, if you have an attribute that contains a single floating point value for each vertex, it's still going to take up the same amount of register space on the GPU as an attribute that contains four floating point values for each vertex.


Uniforms, on the other hand, are packed more efficiently into the register space available on the GPU. If we have four float uniforms, for example, they will be packed together into a single register. To do this, OpenGL ES may have to reorder the uniforms to make the most efficient use of the available space.

It seems likely that because attributes aren't packed, and therefore don't need to be reordered when the program is linked, OpenGL ES can afford to let us choose our own index values for attributes. OpenGL ES takes on the responsibility of assigning index values for uniforms, however, so that it can make the best use of the available register space.

You can read more about the way GLSL works by reading the GLSL specification for OpenGL ES (which is different from the GLSL specification for desktop OpenGL) here: http://www.khronos.org/files/opengles_shading_language.pdf. In fact, once you're done with the book and are comfortable with the general way OpenGL ES works, I strongly recommend reading the specification. It can be a little dry, but it's full of information you should know if you're doing any serious programming with OpenGL ES.

Information specifically about uniform packing can be found in Appendix A, part 7.

The next two methods simply return the index number for a given attribute or uniform. For an attribute, the method returns the index from the array, because that's what we told OpenGL ES to use when we called glBindAttribLocation(). For uniforms, we have to ask OpenGL ES for the index value it assigned at link time. Note that both of these methods involve string comparisons, so if possible they should be called only once; the controller class that creates the GLProgram instance should keep track of the values returned from attributeIndex: and uniformIndex: (a sketch of that follows the listing below). String lookups are costly enough that doing them a few hundred times a second could have a noticeable impact on drawing performance.

- (GLuint)attributeIndex:(NSString *)attributeName
{
return [attributes indexOfObject:attributeName];
}

- (GLuint)uniformIndex:(NSString *)uniformName
{
return glGetUniformLocation(program, [uniformName UTF8String]);
}
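Here's a minimal sketch of that caching suggestion, assuming the controller declares GLuint instance variables for each index (the ivar names are hypothetical):

// Right after a successful [program link], look the indices up once...
positionAttribute = [program attributeIndex:@"position"];
colorAttribute = [program attributeIndex:@"color"];
translateUniform = [program uniformIndex:@"translate"];
// ...and reuse the cached GLuint values in the drawing code every frame,
// instead of calling attributeIndex: or uniformIndex: again.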

Next up is the method that gets called to link the program after the attributes have been added. It links and validates the program, then retrieves the link status. If the link failed, we immediately return NO. If the link succeeded, we delete the two shaders and then return YES to indicate success. The reason we don't delete the shaders when the link fails is so our object still has access to the shader logs and we can debug whatever went wrong. Once a shader is deleted, its log is gone as well.

- (BOOL)link
{
GLint status;

glLinkProgram(program);
glValidateProgram(program);

glGetProgramiv(program, GL_LINK_STATUS, &status);
if (status == GL_FALSE)
return NO;

if (vertShader)
glDeleteShader(vertShader);
if (fragShader)
glDeleteShader(fragShader);

return YES;
}

The method to make this program the active one is called use, and it does nothing more than call a single OpenGL ES method, passing in program:

- (void)use
{
glUseProgram(program);
}

The next four methods are the log methods. As I mentioned earlier, we have one private method that handles the real work, and that method is called by the three public methods. The way you get logs from OpenGL ES is a little old school. No, it's a lot old school. It's practically medieval. We first have to ask OpenGL ES how long the log we're interested in is, then allocate a buffer to hold that much data, then retrieve the log into that buffer. In our case, we then turn those characters into an NSString and free() the buffer before returning the NSString instance with the log data:

- (NSString *)logForOpenGLObject:(GLuint)object 
infoCallback:(GLInfoFunction)infoFunc
logFunc:(GLLogFunction)logFunc
{
GLint logLength = 0, charsWritten = 0;

infoFunc(object, GL_INFO_LOG_LENGTH, &logLength);
if (logLength < 1)
return nil;

char *logBytes = malloc(logLength);
logFunc(object, logLength, &charsWritten, logBytes);
NSString *log = [[[NSString alloc] initWithBytes:logBytes
length:logLength
encoding:NSUTF8StringEncoding] autorelease];
free(logBytes);
return log;
}

The next three methods are the public log methods, and they all just call the private method above:

- (NSString *)vertexShaderLog
{
return [self logForOpenGLObject:vertShader
infoCallback:(GLInfoFunction)&glGetShaderiv
logFunc:(GLLogFunction)&glGetShaderInfoLog];
}

- (NSString *)fragmentShaderLog
{
return [self logForOpenGLObject:fragShader
infoCallback:(GLInfoFunction)&glGetShaderiv
logFunc:(GLLogFunction)&glGetShaderInfoLog];
}

- (NSString *)programLog
{
return [self logForOpenGLObject:program
infoCallback:(GLInfoFunction)&glGetProgramiv
logFunc:(GLLogFunction)&glGetProgramInfoLog];
}

Finally, we have our dealloc method, which releases the mutable array, then checks the two shaders and the program, and if any of them are non-zero, that means there's an OpenGL ES object that needs to be deleted, so we delete them.

- (void)dealloc
{
[attributes release];
[uniforms release];

if (vertShader)
glDeleteShader(vertShader);

if (fragShader)
glDeleteShader(fragShader);

if (program)
glDeleteProgram(program);

[super dealloc];
}

Got all that? Good. Now you can forget most of it. You need to remember what programs, shaders, attributes, and uniforms are and how they relate to each other, but you can forget the nitty-gritty details of creating programs. It was worth walking through the process once so that when something does go wrong, you know roughly where in the compile-and-link sequence to look, but from now on, you can just use GLProgram to load your shaders. Let's take a look now at how to use GLProgram.

Using GLProgram

Using the GLProgram object we just created is relatively easy. You first allocate and initialize an instance by providing the names of the two files containing the shaders' source code, leaving off the .vsh and .fsh extensions.

GLProgram *program = [[GLProgram alloc] initWithVertexShaderFilename:@"Shader"
fragmentShaderFilename:@"Shader"];

Next, add any attributes used in your shader to the program. If you had two attributes, one with the position of each vertex called position, and one with a color for each vertex called color, your code to add attributes would look like this:

[program addAttribute:@"position"];
[program addAttribute:@"color"];

After you add the attributes, link the program. If the program can't link successfully, link returns NO, and you should dump the logs to the console so you can debug the problem. Once you've dumped the logs, it's a good idea to release the program in order to free up the memory it was using, and so you don't accidentally try to use an invalid program:

if (![program link])
{
NSLog(@"Link failed");
NSString *progLog = [program programLog];
NSLog(@"Program Log: %@", progLog);
NSString *fragLog = [program fragmentShaderLog];
NSLog(@"Frag Log: %@", fragLog);
NSString *vertLog = [program vertexShaderLog];
NSLog(@"Vert Log: %@", vertLog);
[program release];
program = nil;
}

If the link process was successful and returns YES, retrieve the uniform and attribute indices and then call use to start using the shaders in this program to draw objects:

GLuint positionAttribute = [program attributeIndex:@"position"];
GLuint colorAttribute = [program attributeIndex:@"color"];
GLuint translateUniform = [program uniformIndex:@"translate"];
[program use];

Once you've called use, you're ready to start submitting uniform and attribute data.
Sending Attributes and Uniforms to the Shaders
We're almost ready to try out the OpenGL ES programmable pipeline by writing our first complete OpenGL ES application, but before we do that, we need to talk about how we actually ship the attributes and uniforms over to the shaders. It's a straightforward process, although it can look intimidating when you first see it in code. The processes for attributes and uniforms are slightly different, so we'll look at them individually.
Submitting Uniforms
Let's look at uniforms first because they're a little easier to grok. After we link our program, we retrieve the index value for each of our uniforms and save that index value so we can use it to submit the data for that uniform to OpenGL ES:

GLuint translateUniform = [program uniformIndex:@"translate"];

Once we have the index value (translateUniform in the previous line of code), we just have to use a function called glUniform() to submit the data for that uniform to our shaders. glUniform() is one of those “alphabet soup” functions that comes in many different forms. Because the shaders run on the GPU, and the GPU deals primarily with floating point numbers, all of the glUniform() variants take one or more GLfloats or one or more pointers to GLfloats.³ To send a single floating point value as a uniform, for example, we'd select either glUniform1f() or glUniform1fv(), depending on whether we needed to send a GLfloat or a pointer to a GLfloat. If we wanted to send a single vertex, which would be represented by a vec3 in the shader, we'd choose either glUniform3f() or glUniform3fv().

Regardless of which of the glUniform() variants we choose, the first argument we pass needs to be the index of the uniform we're submitting data for, which is the value we earlier retrieved from uniformIndex:. When using the non-pointer variants of glUniform() (the ones with a name ending in f), the uniform index is followed by the data being submitted in the proper order. If we need to submit a vertex's location, for example, we would submit the value for X as the second argument, the value of Y as the third argument, and the value of Z as the fourth argument. So, to pass a single non-pointer value using glUniform(), we would do this:

glUniform1f(translateUniform, 25.3f);

To pass a vertex with three values to a uniform, we'd do something like this:

glUniform3f(vectorUniform, 2.3f, -1.34f, 0.34f);

When using the glUniform() variants that end in v, we follow the uniform's index with the size of the data being passed and then a pointer to the actual data, like so:

GLfloat vertex[3];
vertex[0] = 2.3f;
vertex[1] = -1.34f;
vertex[2] = 0.34f;
glUniform3fv(vectorUniform, 1, vertex);

In effect, this code is identical to the previous example, but we're using one of the pointer variants of glUniform() to pass all three values in the array that make up the vec3 uniform. The gotcha here is the count argument. You might think you should pass 3 in the example above because the vertex array has a length of three, but because we're using glUniform3fv(), we pass 1: that function already assumes each element is three GLfloats long, so the count is the number of vec3s being passed, not the number of floats. If the elements were shorter, you'd be using glUniform2fv() or glUniform1fv().
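The count argument really comes into play when the uniform is declared as an array in the shader. As a hedged sketch (the lightPositions uniform and its index variable are hypothetical), passing two vec3 values to a uniform declared as uniform vec3 lightPositions[2]; might look like this:

GLfloat lightData[6]; // two vec3s packed back to back
lightData[0] = 0.0f; lightData[1] = 1.0f; lightData[2] = 2.0f; // first light
lightData[3] = 3.0f; lightData[4] = 0.5f; lightData[5] = -1.0f; // second light
glUniform3fv(lightPositionsUniform, 2, lightData); // count is 2: two vec3s, not six floats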
Submitting Attributes
Sending attribute data to the shader is only slightly more involved than submitting uniforms. First, you have to retrieve the index value for the attribute, like so:

GLuint positionAttribute = [program attributeIndex:@"position"];

Just like with uniform indices, you'll want to do this only once, if possible, and definitely not every time you draw because it invokes a string comparison operation as it loops through the attributes array, which is computationally expensive. Once you have the index, you need to submit the data using the OpenGL ES function family called glVertexAttrib(). Like glUniform(), glVertexAttrib() is an alphabet-soup function with many different versions. However, since you're almost always going to be sending a large array of data to the shader when you're working with attributes, in practice, you almost always use the same function: glVertexAttribPointer(), which allows you to submit a variable length block of data to the attribute in your shader. Here's what a typical call to glVertexAttribPointer() might look like:

glVertexAttribPointer(positionAttribute, 3, GL_FLOAT, 0, 0, vertices);

The first parameter (positionAttribute in the previous line of code) is the index that corresponds to the attribute we're providing data for, the one we just retrieved from attributeIndex:. The second parameter (3 above) tells OpenGL ES how many data elements there are for each vertex or, in other words, how big a chunk of the submitted data should get sent to each run of the vertex shader. So, if you're submitting vertex position data (x, y, z), each run of the vertex shader needs three data elements, and this argument should be set to 3. If you're sending a color (r, g, b, a) for each vertex, each run of the vertex shader would need four elements, and you would pass 4. The next argument tells OpenGL ES what type of data is being submitted. In this case, each vertex is made up of three GLfloats, so we pass GL_FLOAT. In theory, there are several values you could pass here, but in practice, since our attributes are made up of one or more floating point values, we will always pass GL_FLOAT.

The fourth argument to glVertexAttribPointer() can be ignored; just pass 0 for it. This argument is used only with the GLfixed datatype, which lets you represent a floating point value using integers, allowing for speedier calculations on systems that are slow at floating point operations, either because there's no GPU or because the GPU internally uses fixed point representations of data. All iOS devices have GPUs that internally use floating point representations and are capable of fast floating point math, so we'll never use the GLfixed datatype when programming for iOS, and you don't have to worry about whether fixed point values are normalized when sent to the shader.

We're not going to look at the fifth argument right now; we'll just pass 0 for it for the time being. The fifth argument is known as the stride argument, and it can be used to pack more than one kind of data into a single block of memory. We could, for example, pack our colors and vertices into a single interleaved array and use the stride argument to tell OpenGL ES how to skip over the color data when passing vertex positions to the shader. We'll look at how to do data interleaving in the chapter on optimizing performance.

The final argument to glVertexAttribPointer() is the pointer to the actual data we're submitting for this attribute. This will be either an array or a pointer to a chunk of memory allocated with malloc().

For each attribute, there's a second call we have to make called glEnableVertexAttribArray(), passing the index of the attribute we're enabling. By default, all attributes are disabled, so we have to specifically tell OpenGL ES to enable an attribute in order for it to ship that attribute's data across to the shader. You can actually call this function just once after you link your program and then never worry about it again. However, it's a very low overhead call, and if an attribute were to get disabled somehow, it would be a very difficult problem to figure out. As a result, it's not uncommon to call glEnableVertexAttribArray() every time you draw to ensure that the attribute data gets sent to the shader.
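Putting those pieces together, a hedged sketch of submitting per-vertex positions and colors each frame might look like the following; the vertices and colors arrays and the cached attribute indices are assumptions for illustration, not code from the book's projects:

// Three vertices (x, y, z each) and one color (r, g, b, a) per vertex
GLfloat vertices[9] = { 0.0f, 0.5f, 0.0f, -0.5f, -0.5f, 0.0f, 0.5f, -0.5f, 0.0f };
GLfloat colors[12] = { 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f };

glEnableVertexAttribArray(positionAttribute); // make sure the attribute is enabled
glVertexAttribPointer(positionAttribute, 3, GL_FLOAT, 0, 0, vertices);

glEnableVertexAttribArray(colorAttribute);
glVertexAttribPointer(colorAttribute, 4, GL_FLOAT, 0, 0, colors);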

Once your attributes have all been enabled and you've submitted your attribute data, you're ready to tell OpenGL ES to draw.

Drawing to a Close

The process of telling OpenGL ES to start rendering is kicked off by one of two function calls: glDrawArrays() or glDrawElements(). There are many nuances to drawing, and the easiest way to learn those nuances is to use the two functions. In the next chapter, we're going to build our first app, which will use glDrawArrays(). A few chapters later, we're going to learn about glDrawElements() and the reasons for having two different drawing functions in the first place.
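Just to preview the shape of the call (we'll cover it properly in the next chapter), drawing the three vertices submitted above as a single triangle would look something like this:

glDrawArrays(GL_TRIANGLES, 0, 3); // draw 3 vertices, starting at index 0, as one triangle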

In this chapter, we've taken a look at OpenGL ES 2.0's programmable pipeline. You learned that vertex shaders run once for every vertex and that fragment shaders run once for every fragment, which corresponds to once for every pixel that will be drawn. You also saw what simple fragment and vertex shaders look like, and learned how to compile them and link a shader pair into a single OpenGL ES program. You even saw how to pass data from your application code to the shaders using uniforms and attributes and how to pass data between the vertex shader and the fragment shader using varyings.

This has been a long chapter, but now we're ready to put the programmable pipeline to use. Take a deep breath and pat yourself on the back for getting through all this boring, but important, up-front stuff.


1 - Actually, the iPad and iPhone 4 use Apple's A4 “system on a chip,” which has an integrated CPU and GPU on a single chip instead of separate CPU and GPU chips, but that doesn't really affect how you program for it, nor does it change the basic way that the two processors interact with each other.

2 - The chip in all iOS devices is capable of doing fast floating point math using something called vector processors. Your code doesn't automatically take advantage of the vector processors, however, so this statement is true, generally speaking. We'll look at how to leverage the vector processors in application code that runs on the CPU in the chapter on optimization.

3 - There's also a related function called glUniformMatrix() that we'll look at when we discuss matrix transformations in a future chapter. We don't want to get ahead of ourselves, but matrices are really nothing more than two-dimensional arrays of GLfloats.