FW got new website / 3 amazing app videos / FW SDK 2.3

This post will be a bit more personal than usual, as this is another important milestone for FlashyWrappers.

If you want to skip to the FW SDK 2.3 release info, scroll down.

———————————————–

It’s finally here – everything FlashyWrappers related was moved to a new website at

http://flashywrappers.com/

In other words, FW has got its own website! This will allow for better expansion of the whole FlashyWrappers project and bring the community of FW users together. There are always so many new ideas, features and fixes people need that I doubt the project will stop anytime soon (despite what I thought in the beginning).

Sales aside, the project has always been fun for me, offering sometimes very frustrating – yet rewarding – challenges, and that's why I'm still working on it. In the first 6 months of working on and off on FlashyWrappers, I think I can reveal it made about 3 sales (that was the ancient version for $97, with 1 platform available). I can assure you, I would never have continued working on it if it was just a "business project".

Since then the business situation has improved, luckily 🙂 But I think it's clear that there was always something drawing me back to this project, regardless of any sales or demand. It really was always more about the technical challenges. Could I capture AIR's OpenGL on iOS, with 1/100th of the experience on iOS? Could I do the same on Android? And so on. The combined challenge of discovering new platforms, finding optimization techniques to beat the older and slower versions, and really pushing myself, combined with the visual reward – that is what's keeping me in.

Of course, I was hoping that some people would find this whole undertaking useful (I wouldn't enjoy doing something people won't use, in the long run). The first SWC I created was for my own video capturing project. And so when I found out much later (early this year) that an app installed in a Ray Ban booth (in Brazil!) would run on FlashyWrappers iOS, I was pretty excited.

FlashyWrappers inside
I mean really inside (on those iPads)

Anyway, the new website contains a seed of showcase videos – the first 3 introducing the Ray Ban app, the ShadeScout app and the Opera Maker app (all iOS). If you've got an app powered by FlashyWrappers, be sure to contact me through the flashywrappers.com contact form! The new website will centralize everything FW related – I'm hoping to add forums for technical support later on (in case the user count grows even more).

This is a pretty niche market, but I have the desire to improve FlashyWrappers step by step – many challenges still remain unsolved (if Keith is reading this, he knows… 🙂) and I don't think I'll rest until the main priorities are sorted out, as long as people still need the library. There's also room for expansion – the underlying libraries are native iOS / Android / Windows / Mac code, so they don't need to depend on AIR in the long run.

I'm happy the new website is finally up. It wasn't that much work, but the situation in my personal life is really crazy (in a bad way), so it's good to see something coming together.

All the best with FlashyWrappers!

Pavel Langweil

———————————————–

FlashyWrappers SDK 2.3 was released, together with the new Android ANE preview! Grab it at the new website linked above.

FlashyWrappers 2.3
-------------------
- !! NEW PLATFORM PREVIEW ADDED: Android (HW) first look - an Android ANE with mp4 hardware accelerated encoding and OpenGL fullscreen capture from AIR. This works only on Android 4.3+. No audio support / no captureMC support (audio support coming in the next release). Do not use in production, but let us know if it doesn't work on your device, with the device log attached.
- Sound Mixer: FWSound mixer included in the package, as it's now an "SDK" and should contain everything
- iOS: You can specify a filename for saving the mp4 to the application storage. You are responsible for erasing these files if needed (instead of the default temp filename, which is reused).
- iOS: Fixed mp4 metatags (switched from QuickTime to MPEG4) for better compatibility - it's now possible to play the videos on Android.
- iOS: Changed internal "hijacking" of AIR's OpenGL FBO - increases performance and reliability of the whole solution.
- iOS: Added support for recording from GPU mode in standard / high resolution in addition to Direct mode.
- iOS: Added "passthrough" encoding level for captureMC (keeps the movie exactly at the bitrate specified in init).

Android – hardware acceleration & MP4 first test

The very first version of the Android hardware accelerated encoder utilizing MediaCodec is working. This is rough and not fit for release yet, but it proves that the same simple concept implemented on iOS works on Android as well.

This ANE, though not guaranteed to work as fast as the iOS version at first, will still be a significant upgrade over the current software-based Android ANE. Based on the initial test (a $100 ASUS Android 4.3 tablet):

New ANE

  • 1024×768 video of AIR stage recorded
  • Stage set to 20fps, no visible lag observed

Old ANE

  • 400×300 video of AIR stage recorded
  • Observed stage speed when recording drops to ~2-3fps

Finally, the video being recorded is MP4. There are no H.264 / AAC license issues since we're using the device encoders instead of software encoders, which means the video can be replayed right away.

Otherwise, the whole concept of the ANE is similar to iOS – fullscreen init, capturing AIR's OpenGL content directly via captureFullscreen and internally sending it straight to video. FlashyWrappers uses advanced features of Android 4.3+, such as rendering to video from an OpenGL surface, to achieve this. Finally, here's a short test video to celebrate the first fully accelerated Android test!

Merry Christmas & H264 / MP4 for desktop news!

Merry Christmas to everyone!

These days before Christmas have been hectic for various reasons – largely supporting numerous FlashyWrappers customers (or people interested in FW) with regard to video replays. I recognize this is a pretty big issue, especially as far as desktop is concerned (where .ogv videos are currently supported). As you probably know, OGV videos can't be played in Flash Player. There are several alternative plans to deal with this for current customers, including creating an OGV video player.

However, by far the biggest demand is for encoding to MP4 files. And understandably so, as MP4 is the only format viable for native playback on any mobile device these days. Up until recently, there have been issues with doing that, as using the x264 encoder (used with FFmpeg in desktop FlashyWrappers) would effectively mean recompiling FFmpeg in GPL mode. This would not only mean a forced GPL license on the whole of FlashyWrappers, but also a forced GPL license for your software (as a customer of FlashyWrappers). This has always been the biggest issue for me, really. I don't like to force any licenses on my customers, and I'm sure many customers would not expect that in the first place.

The situation on mobile devices with iOS or Android is different – hardware H.264 encoders can be utilized on those devices, and nobody has to worry about licensing or royalty payments.

A change is coming to FW on desktops though. I've been researching quite a lot about the OpenH264 lib in recent months. I came to the conclusion that it is pretty safe to include the library in FlashyWrappers (or rather, to create another branch of FlashyWrappers desktop with official H264 support). So what are the catches? I'm not a lawyer and this is only based on my personal research, but:

1) First, the good news: the OpenH264 license, for both the source code and the binaries, is BSD. This is an absolutely non-restrictive license, regardless of commercial or non-commercial use. That means no license forcing you to make your software open sourced. That's the most important thing.

2) Cisco paid the licensing fees for the binaries they build from their sources. From what I understood, the party distributing the encoder must pay fees to MPEG LA. To make sure Cisco is always the distributor, you are required to make your app download their binaries before using them. That way Cisco is the distributor, not you. This would be applicable only to AIR of course, where AIR could call the externally downloaded EXE file.

3) I will however go the other way first – including Cisco's source code in one FlashyWrappers branch. That will mean SWC / ANE files with no binaries to download. This is the only way for Flash Player based apps, which cannot download or execute any binaries, and as a side effect it will also work in desktop AIR, which is why this way is the priority.

So what about the royalty payments?

From what I personally understood, the royalty payments are zero for 0 – 100,000 units / installs of the encoder per year. I'm sure that many if not most of my customers (me included) won't go above that number. This threshold is included in the MPEG LA license, which you can also check, and I found confirmations on several forums that this is true (since 2005).

If you do get over 100K units, the fees are really not as horrible as you might have imagined – something like $0.20 / $0.10 per unit, depending on how many units (installs) you sell.
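To make the arithmetic above concrete, here's a toy sketch (Python for illustration only; the `h264_royalty_estimate` name is mine, and the flat $0.20 rate and single 100K free tier are simplifications of the tiered MPEG LA schedule, so check the actual license terms for real numbers):

```python
def h264_royalty_estimate(units, free_tier=100_000, rate=0.20):
    """Rough annual H.264 encoder royalty estimate.

    Assumes the first `free_tier` units per year are royalty-free and a
    flat per-unit `rate` beyond that. The real MPEG LA schedule is
    tiered, so treat this as a back-of-the-envelope sketch only.
    """
    billable = max(0, units - free_tier)
    return billable * rate

# Under 100K units/year there is nothing to pay:
print(h264_royalty_estimate(50_000))   # 0.0
# 150K units at an assumed $0.20 per unit beyond the free tier:
print(h264_royalty_estimate(150_000))  # 10000.0
```

The point of the sketch is just that the cost is zero until you cross the free tier, and grows per-unit after that.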

Having said that, I feel it is safe to provide FlashyWrappers for desktop with OpenH264. I'll need to find the time alongside the Android version (and there might still be issues with compiling this using FlasCC / Alchemy 2), but I'll at least start the attempt very soon.

The only part still unclear to me is whether or not you'd need to obtain a license from MPEG LA anyway, but I'm assuming that Cisco's BSD license applies in this case (and Cisco dealt with MPEG LA over their license). I can't imagine this being a huge issue though.

Coming back to video replays – having H264 support should mean the ability to introduce video replays natively. Two big issues squished in one go 🙂

FW 2.21 released

FlashyWrappers 2.21 was released! Grab it here: http://rainbowcreatures.com/product_flashywrappers.php

All of the changes and fixes included are exclusively for the iOS version, which needed some stabilizing before starting on Android or iterating on the other platforms. From the changelog:

- iOS: Fixed an issue generating an internal OpenGL error, which would usually manifest as errors in Stage3D when creating vertex buffers during capture
- iOS: Fixed a bug which caused the screen to go black for various seemingly random reasons, such as texture upload (only the FW logo would be visible in the free version)
- iOS: Fixed the wrong video orientation outside of iOS
- iOS: Fixed a buggy Retina implementation that could cause the screen to stretch while capturing
- iOS: Added different app / movie fps handling for captureFullscreen (you don't need to worry about your target fps now - only on iOS so far!)
- iOS: Audio is captured directly into the mp4 stream. Capturing to a temp WAV stream and merging with the video later is still supported, if needed, to save a few fps while recording
- iOS: depthAndStencil no longer needs to be enabled for captureFullscreen to work (this was related to a bug which has now been fixed)
- iOS: You can now call captureFullscreen() on every ENTER_FRAME; it will capture only after myEncoder.init() and stop after myEncoder.encodeIt()
- iOS: Added platform default for testing outside of iOS

Extra for iOS: Hungry Hero (starling game) integration example

Extra for iOS: Latest version of FWSoundMixer for iOS inside the HH integration example

There will still be work going on for iOS – hopefully getting rid of the need for FWSoundMixer for audio recording, eliminating the video post-processing time, and fixing any new bugs that appear. As you might know, there was also work going on FWSoundMixer for iOS. This is also close to finished, but the microphone issue needs to be stabilized / better understood first to make a more "stable" release. Nevertheless, the latest ANE of FWSoundMixer is included in FW 2.21 (as that case doesn't need the microphone), in the Hungry Hero Starling iOS example.

Also in the plans is further simplification of the API for all platforms.


Microphone trouble

Although the iOS issues in the encoder were solved, there is one more thing – in the related FWSoundMixer for iOS, which is going to be released as well in the next batch. Everything is working fine, except when recording with the microphone. As soon as the sound mixer starts recording the microphone, the rest of the audio playback goes laggy. This results in more strange things when playing back the video (the mic going out of sync being my favorite 🙂).

I'm not entirely sure, but it seems to me that using SampleDataEvent both for microphone recording and as the callback that makes the sound mixer perform its "audio step" creates some kind of collision. In other words, SampleDataEvent for microphone capture and SampleDataEvent for playing PCM sound (you can do both), when used together on iOS, seem to cause lags. On desktop I either can't notice it, or the CPU is so powerful it's just not apparent. Also, I might just be doing something wrong.

Right now I'm trying to find a workaround, either by eliminating SampleDataEvent for audio recording or by trying to record the microphone natively – though even that seems problematic, as expected (a fight with AIR over the microphone probably ensues). Resolving the microphone is important for integration into several apps that I'm about to do.

iOS last known bug fixed!

Managed to fix the bug I was talking about yesterday. Long story short, we are sometimes our own worst enemies – and this bug was entirely of my own making (as most bugs are). I had to analyze the OpenGL pipeline on iOS and compare several cases to finally spot the obvious thing (and before that I had to comment out lines from the Starling source for hours to isolate the problematic command in AS3 🙂). I was passing the wrong argument to an OpenGL function. This caused the ANE to work only under certain conditions. As soon as a texture was uploaded before init, it would break the "needed" conditions. Internally, the texture ID had to equal the FBO ID, because I was using one instead of the other.

Anyway, this fixes the seemingly random "black screen" issue! So we're back in "release mode" with the Hungry Hero Starling demo and the next FW. No more examples / features will make it into the next release.

The last part for me is to implement recording only after a level is started (and ending it after the level is lost / the hero dies). If this works (and after this fix it should, because it no longer matters when "init" is called) – I'll start preparing the new release package, with the new Hungry Hero sample included.

I'm sorry this release is taking longer than expected (and sorry to everyone waiting for some kind of demo / help we discussed over e-mail), but I really needed to nail down these big iOS issues. As a result, 3 major issues were killed on iOS (this upload texture interference, the earlier vertex buffer upload interference, and problems with audio sync / timing). I won't get in everything that I wanted (killing the video post-processing will be trickier than I thought, for example), but gaining stability is the #1 goal for this release. Everything else can only be improved from then on 🙂

Quick status update

In case you're wondering what's currently going on: I've been hunting down a strange bug which manifested in my Starling game sample on iOS. It manifested only when FW was initialized at a frame greater than zero (0). If late initialization happened, a black screen was displayed while capturing (with the FlashyWrappers logo). This is the last *known* serious bug that needs to be fixed on iOS. Of course I can't rule out more Stage3D related bugs, as OpenGL might be interacting intricately with the ANE's changes, but hopefully not.

Even from this bug it seems that if I move the internal initialization of the texture cache and other OpenGL related things before anything else happens, it is fine. Which makes sense – at startup there's little OpenGL code coming from Starling/Stage3D to collide with the initialization inside the ANE. I still want to find the cause of the collisions (in my ideal world I should be able to call init late and it should avoid all issues / collisions).

But if I can't prevent / figure out the collisions quickly, I'll just force / move the custom OpenGL inits into early code which will be called either automatically or manually by some method, as a "workaround" to prevent any similar collision related bugs on init.

Many / slashes / in / today’s / post 🙂

iOS audio done!

It seems the most serious issues with audio sync on iOS have been fixed. Since video frames are captured on ENTER_FRAME, it was necessary to insert some kind of time "gateway" deciding whether or not to write a frame, given the desired target video fps. In general, you want the library to write every other frame if you're targeting a lower fps than your game is running at (which you should). As soon as this was done, the sync issues went away. This gateway is in the native code, taking advantage of the precise time measurements available on iOS.
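The "gateway" idea can be sketched in a few lines (Python for illustration only; the real logic lives in FW's native iOS code, and the class name here is hypothetical):

```python
class FrameGateway:
    """Decides on each ENTER_FRAME whether to write a video frame, so
    frames land at the target video fps no matter how fast the app runs.
    A sketch of the concept, not FlashyWrappers' actual code."""

    def __init__(self, target_fps):
        self.frame_interval = 1.0 / target_fps
        self.next_write = 0.0  # time of the next frame we want to keep

    def should_write(self, now):
        if now >= self.next_write:
            # Keep this frame and schedule the next one interval later.
            self.next_write = now + self.frame_interval
            return True
        return False  # the app is rendering faster than the video needs

# A 60fps app recording 30fps video keeps roughly every other frame:
gateway = FrameGateway(target_fps=30)
kept = sum(gateway.should_write(i / 60.0) for i in range(60))
print(kept)  # roughly 30 of the 60 app frames get written
```

The same idea works for any app-fps / video-fps combination, as long as the app renders at least as fast as the target.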

But the majority of the work was spent on rewriting the "temporary" audio recording into WAV. Now both the audio recording and encoding happen in realtime: the audio is encoded into AAC and inserted into the MP4.

So to sum up, the audio now seems to be in sync all the time, for any video fps. For example, if your gameplay is 60fps and your video is 24fps, your audio should still be in sync within the 24fps video, even though originally it was in sync at 60fps. The only exception would be if you tried to record at a higher fps than the game is capable of rendering. That might mess things up. For example, if you recorded the video at 60fps but your game rendered only at 50fps, you would be "missing" video frames and the audio would go out of sync again.

This is a situation you don't want to get into in the first place. You might just play it safe and set the video fps to something like 20fps. Or even better, you can do a little test recording, measure the fps, and set your "real recording" target fps 15%-25% lower to leave a buffer for any "bumps" in the fps (or, if the target falls under an acceptable number such as 20, the video resolution would need to be decreased as well).

Which is a good idea for a new library feature: a method to determine the ideal fps for your videos, so you're not stuck with one fps and people with more powerful iPads can record at better quality 🙂
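A minimal sketch of what such a feature could look like (Python for illustration; the function name and return shape are my own invention, and the 15%-25% headroom and the 20fps floor come from the paragraphs above):

```python
def safe_target_fps(measured_fps, headroom=0.20, min_acceptable=20):
    """Pick a recording fps from a test-recording measurement, leaving
    `headroom` (15%-25% suggested) as a buffer for fps "bumps".
    Also flags when the result falls under `min_acceptable`, meaning
    the video resolution should probably be lowered as well."""
    target = int(measured_fps * (1.0 - headroom))
    return target, target < min_acceptable

print(safe_target_fps(60))  # (48, False): plenty of headroom
print(safe_target_fps(22))  # (17, True): consider lowering resolution
```

A device that measures 60fps in the test recording would record at 48fps; a weaker one measuring 22fps would land under the floor and should also drop the resolution.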

Anyway, there are still 1 or 2 fixes coming before the next version release. Right now I'm trying to completely get rid of post-processing of the video on iOS. It can take quite a long time (think ~10 seconds for 1 minute of video recording… on one of the latest iPads). It's async, so it doesn't freeze the game or anything, but it's still annoying to wait after the recording has finished. This is mostly just to flip the video upside down, because of OpenGL. In the next iOS release, flipping will very likely be done straight on the texture by OpenGL to avoid the costly post-processing.


Reworking iOS audio…

So the audio recording for iOS was practically finished today – FWSoundMixer was modified to deal with some things breaking the recording in Hungry Hero (and other games/apps in the future as well). I had the HH gameplay video with audio saved in the camera roll and was almost ready to publish the video to YouTube (and here), when I noticed the audio getting slightly delayed / out of sync. I wasn't too sure if this delay was fixed in size (that would be less serious) or growing. Unfortunately, it was growing – it started to become apparent when I made the recording longer.

I realized the method I'm currently using for recording audio on iOS might not be ideal. The ANE records all the audio into a temporary WAV file, which is composed with the video track by AVFoundation after the recording is finished. The obvious problem is that this raw WAV file doesn't contain any info about when the audio packets came in. I'm still not exactly sure why the video tends to be a bit shorter than the audio after recording (apparently losing video frames, maybe because of lags). In any case, because there's no way for AVFoundation to sync the audio/video, it just slaps them together, and the longer audio track slowly goes out of sync with the video.

Rather than trying to even out the tracks, I decided it's not good to rely on an ideal recording situation anyway.

Long story short, I'll need to try recording all the audio with AVFoundation and encode it on the fly (like the video), so that it's composed as it's coming in. I hope AVFoundation takes care of the interleaving & mixing, just as FFmpeg does in the desktop versions of FlashyWrappers.
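The growing drift described above can be captured in a toy model (Python for illustration; this assumes dropped video frames are the cause, which, as noted, isn't fully confirmed, and the function name is mine):

```python
def sync_offset_at(t, assumed_fps, actual_fps):
    """Audio/video offset in seconds at playback time `t` when the muxer
    assumes `assumed_fps` but the recorder only captured `actual_fps`,
    and the raw WAV audio carries no timestamps to compensate.
    A toy model of the drift, not AVFoundation's actual behavior."""
    # The shorter video track effectively plays back faster than real
    # time, so the untimestamped audio falls progressively behind.
    speedup = assumed_fps / actual_fps
    return t * speedup - t

# Assuming a 24fps target but only 22fps actually captured:
print(round(sync_offset_at(60.0, 24, 22), 2))  # ~5.45s behind after a minute
```

The key property is that the offset grows linearly with recording length, which matches what I observed: the delay only became apparent once the recordings got longer.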