Quick status update

In case you’re wondering what’s currently going on: I’ve been hunting down a strange bug that showed up in my Starling game sample on iOS. It manifested only when FW was initialized at a frame greater than zero (0). If initialization happened late, a black screen (with the FlashyWrappers logo) was displayed while capturing. This is the last *known* serious bug that needs to be fixed on iOS. Of course I can’t rule out more Stage3D-related bugs, as OpenGL might be interacting intricately with the ANE’s changes, but hopefully not.

But even from this bug it seems that if I move the internal initialization of the texture cache and other OpenGL-related things before anything else happens, everything is fine. Which makes sense: at startup there’s little OpenGL code coming from Starling/Stage3D to collide with the initialization inside the ANE. I still want to find the cause of the collisions (in my ideal world I should be able to call the init late and it should avoid all issues / collisions).

But if I can’t prevent / figure out the collisions quickly, I’ll just force / move the custom OpenGL inits into early code, which will be called either automatically or manually by some method, as a “workaround” to prevent any similar collision-related bugs on init.

Many / slashes / in / today’s / post 🙂
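For illustration, the workaround from above boils down to an init order like this. A minimal sketch: FWVideoEncoder and init() are named elsewhere on this blog, but the package path and constructor below are my assumptions, not the real API.

package {
    import flash.display.Sprite;
    import flash.events.Event;
    import com.rainbowcreatures.FWVideoEncoder; // package path assumed

    public class Main extends Sprite {
        private var myEncoder:FWVideoEncoder;

        public function Main() {
            // run FW's init at frame 0, before any Stage3D / Starling
            // code can issue OpenGL calls of its own
            myEncoder = new FWVideoEncoder(this); // constructor assumed
            myEncoder.init();
            addEventListener(Event.ADDED_TO_STAGE, startStarling);
        }

        private function startStarling(e:Event):void {
            // only now create the Context3D / start Starling
        }
    }
}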

iOS audio done!

It seems the most serious issues with audio sync on iOS have been fixed. Because video frames are captured on ENTER_FRAME, it was necessary to insert a kind of time “gateway” deciding whether or not to write a frame, given the desired target video fps. In general, you want the library to skip frames (for example, write only every other frame) if you’re targeting a lower fps than your game is running at (which you should). As soon as this was done, the sync issues went away. This gateway is in the native code, taking advantage of the precise measurements of elapsed time on iOS.
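To illustrate the idea (the real gateway lives in the native code and uses iOS’s precise timers; this AS3 sketch of mine just shows the decision logic, it’s not the ANE’s internals):

import flash.utils.getTimer;

var lastWriteTime:Number = 0;

// called on every ENTER_FRAME: should a video frame be written,
// given the desired target video fps?
function shouldWriteFrame(targetFps:Number):Boolean {
    var frameInterval:Number = 1000 / targetFps; // ms per video frame
    if (getTimer() - lastWriteTime >= frameInterval) {
        lastWriteTime = getTimer(); // simplest form – the native code
                                    // compensates for drift better
        return true;
    }
    return false;
}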

But the majority of the work was spent on rewriting the “temporary” approach of recording audio into a WAV file. Now both the audio recording and encoding happen in real time – the audio is encoded into AAC and inserted into the MP4 as it comes in.

So to sum up, the audio now seems to stay in sync all the time, for any video fps. For example, if your gameplay runs at 60fps and your video is 24fps, the audio should still be in sync within the 24fps video, just as it was at 60fps. The only exception would be if you tried to record at an fps higher than the game is capable of rendering. That might mess things up: for example, if you recorded the video at 60fps but your game rendered only at 50fps, you would be “missing” video frames and the audio would drift out of sync again.

This is a situation you don’t want to get into in the first place. You might just play it safe and set the video fps to something like 20fps. Or even better, you can do a short test recording, measure the fps, and set your “real recording” target fps 15%–25% lower to leave a buffer for any “bumps” in the fps (and if the target falls below an acceptable number such as 20, the video resolution would need to be decreased as well).

Which gives me an idea for a new library feature: a method to determine the ideal fps for your videos, so you’re not stuck with one fps and people with more powerful iPads can record at better quality 🙂
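Until that exists, a rough version can be done in plain AS3. A sketch with made-up names: measure the achieved fps over a short test window, then keep a ~20% buffer.

import flash.events.Event;
import flash.utils.getTimer;

var frames:int = 0;
var testStart:int = getTimer();

stage.addEventListener(Event.ENTER_FRAME, measureFps);

function measureFps(e:Event):void {
    frames++;
    var elapsed:Number = (getTimer() - testStart) / 1000.0;
    if (elapsed >= 5.0) { // 5 second test recording
        stage.removeEventListener(Event.ENTER_FRAME, measureFps);
        var measured:Number = frames / elapsed;
        var target:int = int(measured * 0.8); // leave a 20% buffer
        trace("measured " + measured + " fps, record at " + target + " fps");
    }
}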

Anyway, 1 or 2 fixes are still coming up before the next version release. Right now I’m trying to completely get rid of post-processing of the video on iOS. It can take quite a long time (think ~10 seconds for 1 minute of video recording… on one of the latest iPads). It’s async, so it doesn’t freeze the game or anything, but it’s still annoying to wait after the recording has finished. The post-processing exists mostly just to flip the video upside down, because of OpenGL. In the next iOS release, flipping will very likely be done straight on the texture by OpenGL to avoid the costly post-processing.

Reworking iOS audio…

So the audio recording for iOS was practically finished today; FWSoundMixer was modified to deal with some things that were breaking the recording in Hungry Hero (and would in other games/apps in the future as well). I had the HH gameplay video with audio saved in the camera roll and was almost ready to publish the video to YouTube (and here), when I noticed the audio getting slightly delayed / out of sync. I wasn’t too sure whether this delay was “fixed” (that would be less serious) or growing. Unfortunately, it was growing – it became apparent when I made the recording longer.

I realized the method I’m currently using for recording audio on iOS might not be ideal. The ANE records all the audio into a temporary WAV file, which is composed with the video track by AVFoundation after the recording is finished. The obvious problem is that this raw WAV file doesn’t contain any information about when the audio packets arrived. I’m still not exactly sure why the video tends to be a bit shorter than the audio after recording (apparently dropped video frames, maybe because of lags); in any case, because there’s no way for AVFoundation to sync the audio and video, it just slaps them together, and the longer audio track slowly drifts out of sync with the video.

Rather than trying to even out the tracks, I decided it’s not good to rely on an ideal recording situation anyway.

Long story short, I’ll need to try recording all the audio with AVFoundation & encode it on the fly (like the video), so that it’s composed as it comes in. I hope AVFoundation takes care of the interleaving & mixing, just as FFmpeg does in the desktop versions of FlashyWrappers.

iOS: amazing iPad1 benchmarks

UPDATE: Managed to resolve the vertex buffer issues! The ANE was passing a nonsense argument at one point, which triggered an OpenGL error that manifested only when creating vertex / index buffer arrays in Stage3D. I won’t bother you with the technical details (thanks to Apple Instruments for OpenGL ES though!). Suffice to say there are no more Stage3D errors when recording Hungry Hero, hurray :). One step closer to the next release.

Part of resolving the issues I mentioned in the last post is testing the libraries on iPad1 as well.

I’ve tested AIR video recording with the Starling game Hungry Hero on this poor old device today, and here are the results (note: the first fps is the video recording fps, the second is the observed / measured gameplay fps while recording):

Realtime audio recording on (with the sound mixer recording all the game sounds / music – this guy is eating a bit of CPU himself):

iPad1 1024 x 768 @ 60fps: ~30fps

iPad1 1024 x 768 @ 24fps: ~35fps

Realtime audio recording off (without the sound mixer – you might still add background music to the video):

iPad1 1024 x 768 @ 60fps: ~34fps

iPad1 1024 x 768 @ 24fps: ~40fps

No recording at all (audio / video):

iPad1: ~60fps

As you can see, the video encoding takes a pretty drastic 20–30 fps off your game; even so, the fact that it works at all on a device as slow as the iPad1 is mind-blowing to me, especially remembering the earlier results without any AIR OpenGL capturing – that approach had issues even on the iPad Retina when recording at half resolution at 20 fps.

Also, even in the worst case it keeps the game pretty playable at 30fps on iPad1.

…so now “just” to get rid of these issues I discovered yesterday.

But to end on a positive note – you saw I mentioned “realtime audio recording”? 🙂 Yes, it ALMOST works now. The only remaining issue is that the audio doesn’t want to merge with the video when the video is flipped upside down (the flipping is needed because we’re capturing from OpenGL, where everything is flipped, so we need to flip it back). I laughed at this one a bit – maybe the audio has nausea from all the flipping. Anyway, I think / hope this should be pretty easy to resolve. Much easier than the vertex / index buffer issue I’m working on right now. I’m about to analyze the iPad’s OpenGL, hoping to see what’s going on under the hood in terms of what AIR is calling, to open that “black box” and figure out where I might be messing it up.

iOS issues update

In case someone is already testing the beta iOS accelerated recording (captureFullscreen) mode (likely you’ve come from the Starling forums), please be aware of several issues I’ve discovered in recent days while integrating the Hungry Hero game:

  • When recording from a Starling (and possibly any Stage3D) project, calling myEncoder.init any later than right after creating the Context3D will produce a black screen with the FlashyWrappers logo when recording. I.e., if you call init in the middle of rendering (as you normally would), this will happen. I know how to fix it “temporarily” with a simple workaround, but I’m not sure exactly why it happens at this point, so I can’t bring a “proper” solution quickly. Part of the problem is AIR being essentially a black box doing stuff with OpenGL, which I must anticipate / figure out.
  • Vertex & index buffer creation fails during & after capturing video (not before). This was not apparent right away, as the game appears to work normally, but something in the ANE is breaking something else in OpenGL that AIR expects. I’m investigating what this could be. The ANE creates another set of internal vertex / index buffers for rendering the quad with AIR’s framebuffer to the screen; I’m not sure how that could interfere, however, as I made sure to save the currently bound buffers and rebind them after using my own.

Unfortunately, issues like these are more to be expected with Stage3D: Stage3D is pretty much a wrapper for OpenGL ES, so people can manipulate more things in the “AIR black box” that might somehow get interference from my ANE. With MovieClip projects there is a pretty much never-changing render loop, so everything should be stable in those projects. I’m already planning to release a new FlashyWrappers version addressing these issues (they delayed me from audio recording, as they are high priority at the moment).

iOS update & starting Android

I’ve managed to fix 2 things in the current FW for iOS:

– The issue with different video / stage fps. The library now handles this difference automatically, trying to make sure the video is recorded at the right speed. More details about this will be available after the release of the next iOS version. This was critical, as rendering and recording in the accelerated fullscreen mode are coupled in one command. I’ll likely replicate this approach on the other platforms as well, as the programmer might be assuming the library handles this already.

– Orientation issues: the video was flipped after exporting from iOS. On iOS it looked right, because the ANE was setting preferred transforms for the iOS player, but all other players played the video flipped upside down. The issue has been resolved by replacing this with “definitive” flipping while post-processing the video.

I’m continuing to work on the iOS FWSoundMixer to make game audio recording support as fast as possible on iOS.

As for Android, I got back to the tests I started about a month ago. I will report more on Android in the following days and weeks. Currently I’m calling a test method from Flash to launch hardware-accelerated test video encoding (about 30 frames). The results: a corrupted H264 stream when testing in the emulator, and a crash when testing on my phone 🙂 As you can see, Android won’t be easy, but I’m determined to get it working at least for 4.3+ devices.

FW iOS Starling integration, version 1

Currently I’m working on integrating FlashyWrappers into the open source game “Hungry Hero”. This is an important step towards testing the framework against popular AIR / Flash game engines, for iOS and beyond.

So the good news is: it works!

Check out Hungry Hero recorded on an iPad mini Retina, fullscreen recording at 1024 x 768 @ 60fps; you can see the game fps in the lower left (it’s not too different from when I’m playing the game without recording):


The bad news is that the setup might be a bit trickier than I thought, and there are some cases where it seemingly doesn’t work unless the user knows what to do (which they don’t). Otherwise, they end up recording (and seeing) a black screen with the FlashyWrappers logo. Not too exciting!

Of course, I also found less serious but important stuff I didn’t think about when developing the first version of the iOS extension. Which is only a good thing, because all of those fixes to improve the extension will make it into the next release pretty quickly (about 1-2 weeks tops). For those who might want to experiment with FW 2.2 iOS & Starling:

1) You need depthAndStencil enabled, which means you need to supply your own Context3D to Starling

There is a nice tutorial here:

http://wiki.starling-framework.org/tutorials/combining_starling_with_other_stage3d_frameworks

You can pretty much copy the tutorial, except of course you want only one Starling, and you must set up the back buffer with the depthAndStencil flag set to “true”:

mStage3D.context3D.configureBackBuffer(stage.stageWidth, stage.stageHeight, 0, true);
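Condensed, the relevant part of the tutorial looks like this (only the depthAndStencil flag is FW-specific; the rest is the standard Stage3D setup):

import flash.display.Stage3D;
import flash.events.Event;

var mStage3D:Stage3D = stage.stage3Ds[0];
mStage3D.addEventListener(Event.CONTEXT3D_CREATE, onContextCreated);
mStage3D.requestContext3D();

function onContextCreated(e:Event):void {
    // the 4th argument is enableDepthAndStencil – FW needs "true" here
    mStage3D.context3D.configureBackBuffer(
        stage.stageWidth, stage.stageHeight, 0, true);
    // then hand mStage3D over to the Starling constructor, per the tutorial
}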

2) You must call myEncoder.captureFullscreen() after calling context3D.present()

Otherwise you’ll most likely end up with a black screen. I’m still investigating whether anything else caused the black screen issue I discovered today. In any case, the library must be more vocal about things happening out of order, if possible, or at least the manual needs to be updated about the Stage3D-related issues.
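In a hand-rolled Stage3D render loop, the ordering looks like this (a sketch reusing context3D and myEncoder from the setup above):

import flash.events.Event;

function renderFrame(e:Event):void {
    // ...clear + draw the frame into the back buffer...
    mStage3D.context3D.present();  // flip the back buffer first...
    myEncoder.captureFullscreen(); // ...then let FW grab the frame
}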

3) I need to decouple encoding a frame and rendering a frame on iOS

The thing is: if your game is running at 60fps and you want to record at 20fps, you might think – simple, let’s just call captureFullscreen() every 3rd frame. But what you might not realize is that captureFullscreen() is also rendering AIR’s buffer content back to the screen (because in the meantime, AIR’s rendering was redirected). Without doing that, AIR’s rendering is literally crippled and doesn’t display anything. So if you call captureFullscreen() every 3rd ENTER_FRAME, you get recording AND rendering of your game only every 3rd frame. This is obviously an issue, as your game appears to be “lagging”, even though in reality it’s running perfectly fine at 60fps. So what I need to do is perform captureFullscreen’s rendering all the time, and captureFullscreen’s recording only sometimes (every game_fps / movie_fps frames).
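Once decoupled, the usage could look something like the sketch below – note that both renderToScreen() and recordFrame() are hypothetical names for the two halves of today’s captureFullscreen(); they don’t exist in FW yet:

import flash.events.Event;

var frameCount:int = 0;
var everyNth:int = 60 / 20; // game_fps / movie_fps = every 3rd frame

stage.addEventListener(Event.ENTER_FRAME, onFrame);

function onFrame(e:Event):void {
    frameCount++;
    myEncoder.renderToScreen(); // hypothetical: must run every frame,
                                // or AIR displays nothing
    if (frameCount % everyNth == 0) {
        myEncoder.recordFrame(); // hypothetical: gated to the video fps
    }
}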

There is more stuff I realized, but I won’t go into detail because it has to do with audio coming into play.

Nevertheless, that’s why fullscreen capturing is called “beta” 🙂

P.S.: I said AIR’s rendering was crippled… however, after calling encodeIt() the ANE hands rendering back over to AIR, so don’t worry about that too much 🙂

FW iOS audio progress

I’ve been investigating how to easily capture audio natively on iOS (hoping to steal AIR’s audio in a similar way to how I’m stealing its OpenGL rendering), and surprisingly, apart from the microphone there seems to be no easy way to do that. I might have missed something (and I saw hints at some methods outside of AVFoundation), but since I’m in a rush to release as compact an AIR iOS game recording solution as I can, I’ll be turning back to FWSoundMixer, at least for the next release (FW 2.21).

FWSoundMixer will get an iOS ANE, which should ensure the mixing is as fast as possible. Together with that, 2.21 will contain a complete Starling game demo with video and audio recording (and saving to the camera roll, of course) – primarily for iOS, but likely for the desktop targets as well. Still no Android, until it gets HW acceleration.

This will hopefully show all the developers that this is a viable solution for AIR on iOS, while allowing me to try the integration for myself at the same time. I’m sure I’ll find things I didn’t think of before that need fixing or improving.

In other news, I have realized that setting the fps might be a bit problematic for game recording, as most games run at 30-60 fps, while videos are commonly encoded at just over 20fps. If you call captureMC() or captureFullscreen() on ENTER_FRAME at 60fps, I can’t guarantee what happens 🙂 The right way should be to call the function either precisely on every Nth frame (not every frame), or to slow the fps down when recording (which of course is not preferable).

Welcome to Rainbow Creatures blog

You’ll find announcements of any new releases or updates of our products here.

And there are things to announce right now 🙂

FlashyWrappers (the video encoding Flash / AIR library & native extension) 2.2 has been released!

In the unlikely case you’ve overlooked it: FlashyWrappers allows you to capture videos from your Flash / AIR apps or games in the fastest way possible (cross-compiled C++ in Flash Player, native extensions on other platforms). You don’t need any “media server”; FW is completely self-sufficient, and your users can save their videos locally or send them somewhere over the internet.

This release brings 2 big updates: the iOS and Android releases. FlashyWrappers iOS is a fully rewritten, hardware-accelerated ANE for AIR, using AVFoundation and, experimentally, OpenGL ES screen capturing directly from AIR. Android is using the code from desktop and will start getting hardware acceleration in the next releases, starting with Android 4.3+.

There are a number of fixes and improvements, namely for the Flash Player (SWC) library. Users trying out FlashyWrappers often had issues building these FlasCC-compiled SWCs in Adobe IDEs. This has been resolved by separating the library into an external SWF while keeping the SWC frontend, which loads the library at run-time. This means shorter compilation times, no autocomplete issues (freezing), no random “Undefined Reference” errors when compiling, and so on.
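For illustration, the run-time loading part follows the usual AS3 pattern, something like this (a generic sketch, not FW’s actual loader code – the file and class names are made up):

import flash.display.Loader;
import flash.events.Event;
import flash.net.URLRequest;

var libLoader:Loader = new Loader();
libLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, onLibLoaded);
libLoader.load(new URLRequest("FWLib.swf")); // made-up file name

function onLibLoaded(e:Event):void {
    // the SWC frontend can now look up classes compiled into the SWF
    var libClass:Class = Class(libLoader.contentLoaderInfo
        .applicationDomain.getDefinition("FWInternalEncoder")); // made-up
}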

FWSoundMixer 1.0 was released!

Those who tried FlashyWrappers 2.0 might have noticed FWSoundMixer. FWSoundMixer allows you to mix playing Flash sounds (the Sound class) plus the microphone if needed and, most importantly, access the raw PCM data, which can be sent to the FlashyWrappers video encoder or saved standalone. So you can save any audio recording to a WAV file, compress it and send it to your server (just an example). Combined with video recording, this can result in interesting apps (karaoke / singing apps, anyone? :-)

The biggest issues in the previous version were the microphone audio lagging and the same building issues as with the FlashyWrappers SWC. The good news is that the lagging issues were identified and appear to be fixed now. The library was also separated into SWF/SWC for those having problems when building.

An out-of-the-box solution for easily adding FWSoundMixer to a FlashyWrappers video encoding project was added as well (you specify the FWVideoEncoder instance in FWSoundMixer’s init method). Finally, as before, FWSoundMixer is completely free to use.
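As a rough usage sketch (the class names are from this post, but the constructors and the exact init signature are my assumptions, not the documented API):

import com.rainbowcreatures.FWVideoEncoder; // package paths assumed
import com.rainbowcreatures.FWSoundMixer;

var myEncoder:FWVideoEncoder = new FWVideoEncoder(this); // ctor assumed
var mixer:FWSoundMixer = new FWSoundMixer();             // ctor assumed

// per this post: pass the encoder instance to the mixer's init method,
// and the mixed PCM gets fed into the video encoding
mixer.init(myEncoder);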

That’s all regarding the releases. Currently I’m still working on the website after the 2.2 release (download links are being sent out automatically, for example) and already have pretty clear goals for the 2.21 release, which will hopefully include some form of native audio recording for iOS, possibly followed by sharing the video to Facebook, the first platform to feature that experimentally.

From there, I’ll be moving on to Android HW-accelerated encoding – Android is the next big target, and it will be pretty challenging (i.e., exciting)!

The overall goal of FlashyWrappers is to bring the most optimized and simple video recording of your Flash / AIR apps or games, for all platforms.