
Wednesday, November 12, 2014

NFC in Firefox OS

[Reposted from https://hacks.mozilla.org/2014/11/nfc-in-firefox-os/]

Firefox OS is being developed in an open collaboration with Mozilla’s partners and community. In that spirit, and over the course of more than a year, Mozilla and Deutsche Telekom (DT) teams worked closely together to develop platform-level support for NFC within Firefox OS. During that time, both teams had regular product and engineering meet-ups for the end-to-end development cycle.
From proposing the NFC API, to defining the overall architecture, to prototyping and completing a production-level implementation on shipping products, this collaboration model worked so well that it really helped showcase the power of the “open” (Open technology and Open contribution model) for pushing the web forward. After all, this is exactly what Mozilla and Firefox OS stand for.
In this post, we describe a few basics around Firefox OS NFC implementation.

NFC Roadmap

Currently in release 2.0, Firefox OS supports NFC-based sharing of content (contacts, images, videos, URLs), as well as wirelessly reading information stored in NFC-enabled tags (tag reading). Our sharing use cases are compatible with NFC-enabled devices on other OSes like Android and Windows, so sharing this content across those devices works too. Our NFC API (first introduced in v1.3) is put to use for these sharing use cases in v2.0 by the core apps.
The overall B2G roadmap is available on the wiki.

WebNFC API

The Firefox OS NFC API allows Peer to Peer (P2P) communication between any two devices that support the NFC Data Exchange Format (NDEF). NFC passive tags that present themselves as NDEF-compatible can also be read and written to. Firefox OS’ NFC implementation is currently for certified applications only but, as stated above, will be opened to marketplace applications as the API is developed to cover more use cases and data formats.

An example using this API

The following example performs P2P communication between two NFC devices (from the NFC API docs on MDN):
// Utility function for UTF-8 string conversion to Uint8Array.
// Or, ideally, simply add this to your webapp HTML to use NfcUtils:
// <script defer src="shared/js/nfc_utils.js"></script>
function fromUTF8(str) {
  if (!str) {
    return null;
  }
  var enc = new TextEncoder('utf-8');
  return enc.encode(str);
}
 
var tnf     = 1;                              // NFC Forum Well Known type
var type    = fromUTF8("U");                  // URI record type
var id      = new Uint8Array(0);              // empty record id
var payload = fromUTF8("\u0003mozilla.org");  // URI data; the 0x03 prefix
                                              // byte stands for "http://"
 
var ndefRecords = [new MozNDEFRecord(tnf, type, id, payload)];
var nfcdom = window.navigator.mozNfc;
 
nfcdom.onpeerready = function(event) {
  // event.detail is a session token
  var nfcPeer = navigator.mozNfc.getNFCPeer(event.detail);
  var req = nfcPeer.sendNDEF(ndefRecords); // push the NDEF message to the peer
  req.onsuccess = function(e) {
    console.log("Successfully pushed P2P message");
  };
  req.onerror = function(e) {
    console.log("P2P push failed!");
  };
};
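The 0x03 prefix byte in the payload above comes from the NDEF URI record definition, which abbreviates common scheme prefixes to a single identifier code so short URLs fit on tiny tags. A small helper can build such a payload for any URL; `encodeUri` and `URI_PREFIXES` are our own illustrative names, not part of the Firefox OS API, and only a subset of the spec's code table is shown:

```javascript
// NDEF URI record identifier codes (subset): a single leading byte
// abbreviates a common scheme prefix (0x00 = none, 0x01 = "http://www.",
// 0x03 = "http://", and so on per the NFC Forum URI record definition).
var URI_PREFIXES = ['', 'http://www.', 'https://www.', 'http://', 'https://'];

// Hypothetical helper: build a URI record payload as
// [identifier code][remaining URL as UTF-8 bytes].
function encodeUri(url) {
  var code = 0;
  // Longer prefixes sit before their shorter variants in the table,
  // so the first match is the longest one.
  for (var i = 1; i < URI_PREFIXES.length; i++) {
    if (url.indexOf(URI_PREFIXES[i]) === 0) {
      code = i;
      url = url.slice(URI_PREFIXES[i].length);
      break;
    }
  }
  var rest = new TextEncoder().encode(url);
  var payload = new Uint8Array(rest.length + 1);
  payload[0] = code;
  payload.set(rest, 1);
  return payload;
}
```

With this helper, the hand-written `"\u0003mozilla.org"` payload above could be produced as `encodeUri("http://mozilla.org")`.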
More such examples that ship with Firefox OS can be found in Using the NFC API.
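Tag reading is the mirror image of the P2P push shown above: a tagfound handler retrieves the tag via its session token and reads its NDEF records. The handler wiring below is a hedged sketch of the certified-app API shape (event and method names as documented for this era on MDN; verify against your target release), and `decodeUtf8Payload` is our own helper:

```javascript
// Our own helper: decode a record payload Uint8Array into a string.
// (For Well Known "T" text records you would first strip the status
// byte and language code; this assumes the payload is plain UTF-8.)
function decodeUtf8Payload(payload) {
  return new TextDecoder('utf-8').decode(payload);
}

// Sketch of tag reading; only runs where mozNfc exists (Firefox OS).
if (typeof navigator !== 'undefined' && navigator.mozNfc) {
  navigator.mozNfc.ontagfound = function(event) {
    // event.detail is a session token, as in the peer example.
    var tag = navigator.mozNfc.getNFCTag(event.detail);
    var req = tag.readNDEF();
    req.onsuccess = function() {
      // req.result is assumed to be an array of MozNDEFRecord.
      req.result.forEach(function(record) {
        console.log('tnf:', record.tnf,
                    'payload:', decodeUtf8Payload(record.payload));
      });
    };
    req.onerror = function() {
      console.log('NDEF read failed');
    };
  };
}
```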

Current Supported data types

The WebNFC API currently supports the NFC Data Exchange Format (NDEF); there are plans for non-NDEF types in the future. As the example above shows, an NDEF record consists of four fields: a TNF octet and three optional Uint8Array fields (type, id and payload). The TNF and type are used to route the message to the appropriate registered web application(s).
(Source: http://git.mozilla.org/?p=releases/gecko.git;a=blob_plain;f=dom/webidl/MozNDEFRecord.webidl;hb=refs/heads/v2.0)
[Constructor(octet tnf, optional Uint8Array type, 
optional Uint8Array id, optional Uint8Array payload)]
interface MozNDEFRecord
{
  /**
   * Type Name Field (3-bits) - Specifies the NDEF record type in general.
   *   tnf_empty: 0x00
   *   tnf_well_known: 0x01
   *   tnf_mime_media: 0x02
   *   tnf_absolute_uri: 0x03
   *   tnf_external_type: 0x04
   *   tnf_unknown: 0x05
   *   tnf_unchanged: 0x06
   *   tnf_reserved: 0x07
   */
  [Constant]
  readonly attribute octet tnf;
 
  /**
   * type - Describes the content of the payload. This can be a mime type.
   */
  [Constant]
  readonly attribute Uint8Array? type;
 
  /**
   * id - Identifier is application dependent.
   */
  [Constant]
  readonly attribute Uint8Array? id;
 
  /**
   * payload - Binary data blob. The meaning of this field is application
   * dependent.
   */
  [Constant]
  readonly attribute Uint8Array? payload;
};
Note, in upcoming Firefox OS releases, we will be updating the data types slightly to make TNF an enum type instead of an octet.
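Until that enum lands, apps can mirror the octet values from the WebIDL above in a plain constants object. The `TNF` object and `buildMimeRecord` helper below are app-level conveniences of our own, not platform APIs; the fallback branch exists only so the helper can be exercised off-device:

```javascript
// App-level mirror of the TNF octet values from MozNDEFRecord.webidl.
var TNF = {
  EMPTY: 0x00,
  WELL_KNOWN: 0x01,
  MIME_MEDIA: 0x02,
  ABSOLUTE_URI: 0x03,
  EXTERNAL_TYPE: 0x04,
  UNKNOWN: 0x05,
  UNCHANGED: 0x06,
  RESERVED: 0x07
};

// Build a MIME-typed record, e.g. a vCard for contact sharing.
// MozNDEFRecord exists only on Firefox OS, so fall back to a plain
// object of the same shape when running anywhere else.
function buildMimeRecord(mimeType, bodyText) {
  var enc = new TextEncoder();
  var type = enc.encode(mimeType);
  var payload = enc.encode(bodyText);
  if (typeof MozNDEFRecord !== 'undefined') {
    return new MozNDEFRecord(TNF.MIME_MEDIA, type, null, payload);
  }
  return { tnf: TNF.MIME_MEDIA, type: type, id: null, payload: payload };
}
```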

Mozilla’s Flame device supports NFC, more devices coming

Our Flame device supports NFC, and we are expecting more commercial devices from our partners soon. The Flame uses an NFC chipset from NXP (the PN547C2).

Videos

Here is a demo video of some of the NFC sharing features based on Firefox OS:



Core apps on the Flame device that use NFC:
  • Gallery
  • Video
  • Music
  • Settings
  • System browser

A sample 3rd party App

Here is an app that Mozillian Dietrich Ayala put together using the NFC tag reading API. BikeCommute is an app that registers an NFC tag to track bike commuters at the Mozilla Portland office. The app is running on a Nexus 4 with Firefox OS 2.2, and is built with Famo.us for UI and PouchDB for data storage and syncing to a remote CouchDB. Currently, the app just reads the user’s email address from a text record written to the tag.
The next version will add support for running the app on users’ phones, using a local contact (user) instead of a plain text record, and being able to configure the NFC tag from their own device. The plan is to develop leaderboards from the CouchDB data and Mozillians.org integration so we can deploy and compete with other offices and Mozillians everywhere! The source code is available on GitHub and pull requests are welcome!

Here is a video demo of this app in action:

More NFC documentation

So, there it is!
We are really excited to introduce this new addition to the growing list of APIs and features in Firefox OS! We hope developers will take full advantage of all that NFC enables by way of device-to-device sharing, as well as services like contactless payments planned for the future.

When can developers start using this API?

Currently this API is available for certified apps. We can’t wait to finish the work to make this API available for privileged apps, so all of you developers can take advantage of it. If you wish to follow along or jump in and help out, feel free to track Bug 1042851. We are targeting the next release, v2.2, to finish this work.

Next in NFC

In upcoming releases, with the help of our partners, we are focusing on expanding the NFC coverage for supporting Secure elements and services like NFC based payments. More on that in a separate post later. Please stay tuned.
Here’s to the open web!

Tuesday, September 09, 2014

Enabling Voice Input into the Open Web and Firefox OS


 

With the advent of smartphones, triggered by the iPhone in 2007, touch became the primary mode of input for interacting with these devices. Now, with the advent of wearables (and other hands-free technologies that existed before), voice is becoming another key input method. The possibilities Voice Input enables are huge, to say the least.
They go beyond merely interacting with in-vehicle devices, accessories and wearables. Just think of the avenues Voice Input opens up for bringing more technology to more people. It’s enormous: Accessibility, Literacy, Gaming, VR, and the list goes on. There is a social vibe here that definitely resonates with our mission at Mozilla, detailed in the Mozilla Manifesto.

How it started

Today’s two leading mobile OS/ecosystem providers, Apple and Google, have their native experiences with Siri and “OK Google” (coupled with Google Now). We really needed an effort to enable Voice Input in the first ecosystem that existed: the open Web. Around MWC 2013 in Barcelona, when Desigan Chinniah introduced me to André Natal, a Firefox contributor from Brazil, we had a conversation about this and instantly agreed to do something about it in whatever way possible. André told me he had been inspired by a talk by Brendan Eich at BrazilJS, so I did not have much convincing to do. :-)

First steps

We had numerous calls and meetings over the past year on the approach and tactics. Since “code wins arguments”, the basic work started in parallel on Firefox desktop and FxOS Unagi devices, later switching to Mozilla Flame devices. Over the past year, we had several meetings with Mozilla engineering leads on the exact approach and decided to break this effort into several smaller phases (“baby steps”).
The first target was getting the Web Speech API implemented, integrating acoustic/language models with a decoder, and giving that a try. Lots of like-minded folks in Mozilla Engineering/QA and the community helped with guidance and code reviews while André moonlighted (on top of his day job) with a very high focus. Things have moved fast in the past month or so. (Well, to be honest, the only day this effort slowed down was when Team Brazil lost to Germany in FIFA 2014. :-)) Full credit to André for his hard work!

Where are we?

Our current thinking is to get a grammar-based (limited commands) app working first and distribute it to our rich and diverse international Mozilla community for accent-based testing and enhancements. Once we have this stabilized, we will move to phase 2, where we can focus more on natural language processing and get closer to a virtual-assistant experience that can give users voice-based answers. There is lots of work to do there and we are just beginning.
I will save the rest of the details for later and jump to the current status this month. Where are we so far?
We now have the Web Speech API ready for testing and we have a couple demos for you to see!

Desktop: Firefox Nightly on Mac

1) http://youtu.be/1nSUvZlLMt8
2) http://youtu.be/R2PPz-O93X0
Editor’s note: for full effect, start playing the two above videos at the same time.

Firefox OS demo

http://youtu.be/65WmRw46-1U
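For a flavor of what these demos exercise, here is a minimal sketch using the standard Web Speech API `SpeechRecognition` interface (the name may be vendor-prefixed depending on the build you test); `bestTranscript` is our own helper, not part of the API:

```javascript
// Our own helper: pick the highest-confidence alternative from one
// recognition result (an array-like of { transcript, confidence }
// objects, as the Web Speech API returns).
function bestTranscript(result) {
  var best = null;
  for (var i = 0; i < result.length; i++) {
    if (!best || result[i].confidence > best.confidence) {
      best = result[i];
    }
  }
  return best ? best.transcript : '';
}

// Hedged sketch of kicking off recognition in a browser context.
if (typeof window !== 'undefined') {
  var Recognition = window.SpeechRecognition || window.webkitSpeechRecognition;
  if (Recognition) {
    var rec = new Recognition();
    rec.lang = 'en-US';
    rec.onresult = function(event) {
      console.log('Heard:', bestTranscript(event.results[0]));
    };
    rec.start();
  }
}
```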

So come, join us!
If you want to follow along, please look at the SpeechRTC – Speech enabling the open web wiki and Bug 1032964 – Enabling Voice input in Firefox OS.
So jump in and help out if you can. We need all of you (and your voices). Remember “Many Voices, One Mozilla”!

 

Thursday, January 16, 2014

My notes from CES 2014






I have been to CES a couple of times in the past and I know it is a big show. But surprisingly, it keeps getting even bigger. After breaking the record with 130K attendees in the past couple of years, this year it is expected to come in around ~150K once the audits are done. That means every keynote worth attending takes at least a ~1.5 hour wait in line; lines for lunch and coffee too. Even after my rigorous planning (demo/keynote timing, long walks between show floors, etc.), I could only cover about half of my list. Nevertheless, that was plenty to get a feel for what’s coming.
Here are the trends I saw, with some thoughts at the end.

Wearables
Pebble Steel smartwatch
This category was very hot this year - so much so that there were almost no smartphone/tablet mentions in the “best of CES” awards or even product announcements. Most of the new demos were about wearables. Some were cool, like the Pebble Steel, which definitely moves their smartwatch from a geeky, plasticky gadget-watch to a more mainstream fashion accessory that regular people will feel comfortable wearing. (Some other smartwatches were like wearing a big phone on your wrist; see the Neptune Pine.) I also saw several upgrades to current health/fitness bands, and smart dog collars too.
LG announced the LG Lifeband Touch and heart-rate headphones. Intel was especially big on this trend, announcing a $500K “Make It Wearable” funding competition for startups and devs. Qualcomm had already announced their Toq smartwatch at Uplinq; it is pitched as a developer-edition watch for now, to push the category (and their BOM parts, like its Mirasol display!).

Home Automation / Entertainment / Internet of Things


Intel based Baby Monitor
Chipset makers and OEMs like Qualcomm, Intel, Samsung and LG looked big on this, as they want to own this space, what with smartphones and tablets saturating. Qualcomm is also pushing an open-standard software framework called AllJoyn in this space and had demos of several home devices “talking” with each other. Intel showed a market-ready baby monitor that has sensors for the baby’s vitals and sends alerts to other devices at home (like mom’s phone, or even a coffee mug with a display). Intel also announced its Edison ultra-small computer that you can use for hobby home automation projects.


Intel Edison
Samsung announced their SmartHome service to control various appliances via a Samsung phone, TV or Gear watch. Same with LG, which lets you control appliances via SMS. The mantra here: buy our appliances and devices so you can control them all seamlessly. (Standardization, anyone?)

Sony’s demo of multiple synchronizable displays (projected or mounted on a wall, ceiling or coffee table) was impressive. You can use the displays separately or in sync.


Sony "Throw" projector
Several use cases can be enabled by this, e.g. playing “music to sunset” with each display showing relevant (different) imagery, or playing a “street scene in Paris” in sync on all displays. Impressively immersive; it looked a bit expensive for mass-market appeal, but not too far away in the future. Their futuristic interactive display, projected onto a coffee table (or a wall), that you can interact with (touching and moving UI widgets) was pretty cool.
interactive UI projected on a table
Wall display (mirror when off)


Ultra-HD TVs, 4K Content, TV/Social integration
Pixels have gone up to 4K, and TVs now come in curved panels (switchable by the user between flat and curved). They look really immersive. In the soccer match demo on Sony’s dual-4K TV, one could clearly see the faces of players and stadium spectators. Also, with social integration you can “watch TV with the world”, with all the social feeds time-synced to the content: a missed goal in a soccer game, and you see a burst of reactions from Twitter in a side panel, or in your wearable glasses.

Sony, Vizio, Samsung and LG are heavy on UHD TVs, mostly planned for mid-2014. TV sizes are going beyond 85-100 inches now.
YouTube had 4K streaming (VP9) demos running at partner booths, like Panasonic TVs. Netflix announced they will roll out the next season of “House of Cards” in 4K. (I would have preferred a sequel to “Breaking Bad” in 4K, but I digress.) And yes, Sony had (expensive) camcorders for consumer 4K recording.

Car Automation / OS / Electronics

There was so much action around cars that CES felt like an auto expo as well. Intel has partnered with BMW on i3 car integration. Qualcomm introduced an automotive chipset, the 602A (quad core, 320 GPU, and a bunch of multimedia/GPS features).

Intel-BMW i3 integration


Google announced (with Audi, GM, Honda, Hyundai and NVIDIA) an Android in-car platform via a global alliance called the Open Automotive Alliance (OAA). Get ready for Android homescreens in the car (and new levels of tracking :-)).
Samsung and BMW announced that the Samsung Gear watch can now control car features.

AR/VR

Chipset companies like Qualcomm and Intel are pushing several AR/VR experiences, such as gaming and kids’ education (e.g. the Qualcomm and Sesame Street partnership). For gaming, Intel has partnered with OEMs for Kinect-like RealSense interactions on laptops, etc. I played with it a bit; it was pretty good at palm/finger-level detection with decently accurate depth sensing.

Some Thoughts on privacy

Privacy is soon slated to get out of hand for consumers. So far they think they can turn off one device (or its features), but in the future their smartwatch-like wearables and their car will still be tracking data about them. The expansion path of smartphones/tablets -> TVs -> automotive (cars) seems a natural evolution for chipset, OEM, OS and data-mining companies. Combine that sort of tracking with data mining of content-consumption habits (inputs to your brain) and social network comments (outputs of your brain), and you can almost digitally clone someone’s brain.
Will consumer awareness itself reach the point where people demand more control of their data, with open technologies like Firefox OS for all their devices? Or will it be the OEM/carrier partners who want to break free of the OS makers’ stranglehold on services and revenues?


Tuesday, March 26, 2013

Five Stages of a New Hire at Mozilla







1. Where (the heck) did I land?? These people are so different from me.

2. Let me try and listen, watch and observe. Ah, Actually these people believe in something that is different from what I have known.

3. Now, many things start to make sense. I observe my brain wiring is under revision.

4. A couple of months pass by. Now, after interacting with me, I note that the new“er” hires are not sure where they landed either.


5. I think I am a Mozillian now.





Thursday, July 05, 2012

Clouds v/s Device-2-Device Technologies

Forecast: cloudy. In fact, a lot of different clouds. The “clouds” have been here for a few years already, and major players are offering services in both consumer and enterprise segments. Especially in the mobile domain, iCloud and Google Drive have major ambitions. Amazon, the pioneer in many ways, is already strong with their compute engine (which Google has now introduced as well). And there are many others (3rd-party app developers included) building services on these clouds, or even building clouds of their own.

How do these clouds help mobile users? Over the past few years, there has been a proliferation of multiple mobile (or just portable) devices per person. It is not just a 2-device problem (desktop PC and smartphone) any more. Many users have tablets, connected cameras, gaming consoles, storage devices, watches and other wearables. Even in-home devices like connected thermostats will proliferate quickly. Clouds certainly have helped users reduce the perpetual “keep them all in sync” problem of the past: all your pictures, media and content are available everywhere, no matter which device you used to create them. Think pictures from your phone, tablet or camera, or your purchased media. But now the problem has shifted to “whose cloud” you have signed into. All Apple devices use iCloud, all Google devices will use Google Drive, and so on. This is not an issue if all the devices in your household are tied to a single ecosystem (iOS or Android), but a big problem if they are not, or if you do not want them to be.

That’s where device-to-device technologies can address the need. Proximity-based device connections can share content with each other “while you are around”. The storage remains on the source device, and the other device is merely “sharing” it. You are sitting in your living room and can make your tablet interact with your TV, phone or set-top box, no matter what OS each is running. The media in this case stays on your phone, and the TV merely offers its large display to watch it on. There are a few options here: you can use one of the proximity-based RF bearers like Wi-Fi Direct (now that WiDi’s future is bleak) or Bluetooth, and make your apps smart enough to handle these bearers; or everyone comes together and forms a standard for proximity-based sharing. Qualcomm has tried to make this interaction easy with their AllJoyn protocol (which is open source), so essentially any OEM or app developer can implement and support it. There may be more initiatives coming here.

Thus the question is: are these two technologies (clouds vs. device-to-device) headed in directions too fundamentally different for any single ecosystem to consider both? Or is there a way to use them to complement each other?

Wednesday, January 19, 2011

CES 2011 - A brief summary of innovations

Attendees: 140,000 (30,000 international)
Companies: 2,500
Approx. products displayed: 20,000+

Quite a few new technologies were on display at CES, and the most exciting categories were in mobile computing. Here is a very short summary of the products on display and the technologies of interest powering them.

Tablets/SmartPhones:
This was definitely the year of the tablet. Over 80 tablets were announced and launched at CES. Motorola not only took home the best-in-category award for the Motorola Xoom, which was also the “best in show” product; their Atrix 4G won best smartphone as well. The overall tablet trend at the show was large screens, customized OSes and high-speed connectivity. Some had innovative form factors, notably the Dell Inspiron Duo, which can swivel between a touch-tablet and a keyboard-laptop, or the Samsung tablet with a slide-out keyboard. Some tablets have 4G/LTE planned at launch. Toshiba had their tablet on display in a glass showcase so no one was allowed to touch it; possibly some new functions are on the way. The already-released Samsung Galaxy also had a lot of buzz in the 7” category, with many media apps for their SmartTVs running on it; it now comes in 4G and Wi-Fi versions. There were also quite a few low-cost tablet options, mainly from companies in China and Taiwan, running various flavors of Android. However, the major software differentiator was who is running Honeycomb, the version of Android Google made for tablets. Answer: Motorola, LG and Toshiba. Some tablets shown will run Microsoft’s Windows 7 PC software (the Samsung slide-out), and RIM showed off their business-focused 4G PlayBook tablet. Other tablets came from ASUS, Acer, MSI and many more.

LTE chipsets: LG announced that they have been working with VZW on LTE since 2008 and showcased a few of their LTE offerings, including the WM300 wireless module based on the L2000 chipset, identified as the world’s first LTE chipset. There were quite a few USB LTE modems from Sierra and Pantech as well. For now, only plug-in laptop modems can take advantage of LTE, but at the show Verizon showed off LTE smartphones from Motorola (the Bionic), LG, HTC and Samsung.
   
Windows on mobile chipsets: Microsoft announced that Windows will support ARM chipsets, including NVIDIA’s, so expect longer battery life from Windows. To counter the tablet threat, Intel and AMD are building graphics capabilities into their processors for faster performance on games, etc.

Connected Appliances:
Everything in home appliances now appears to have a net connection and a display: the fridge that can do energy management for the entire home, offer you recipes of choice, and show you the weather and other stats like how many times you opened the fridge door (energy-management tips). There were also washers and dryers that can delay the cycle until smart-grid rates drop to a minimum and show you many tips pulled from the clouds (all kinds of clouds). The issue here may be that appliance companies, utility companies and TV/cable companies all want to do this; one or two distribution/pricing/GTM strategies will eventually need to emerge.

3D TVs/3D displays/3D LCDs/Mobile 3D:
This was anticipated, and 3D TVs amassed a lot of attention at the show. The quality has improved many times over, e.g. LG’s no-flicker 3D. They also showcased 3D without glasses. This technology is making inroads beyond TVs now, and we could see 3D in laptop and mobile displays. There is a lot of speculation about whether 3D will really take off, but glasses-free 3D shows some promise based on the response there. The bulky, expensive, battery-powered glasses are also paving the way for light, inexpensive ones.
  
Convergence/Media Everywhere: A few companies are trying to solve the problem of viewing a user’s owned media on any of their owned devices. Qualcomm showcased this with their Skifta app (similar to DLNA). Motorola showcased their Medios and Mover solutions. These solutions also target cloud-based content like Netflix. A major push was seen from TV makers trying to solve this problem as well.

TV-based video calling: TV and chipset makers, in association with VoIP companies, showcased this technology in action. Notably, Sony was demoing Skype video calling on their TVs, and Intel demoed Cisco Umi running on their chipsets. Services are to be ordered from the respective providers (in this case, Skype and Umi). Question: how many services/bills can customers handle? Expect some service consolidation here…

SmartTVs: Samsung used their keynote to show this product through a story setting interspersed with their CEO’s address: a high-quality TV that can pull content from all internet providers and has widgets and apps. I also saw a few more SmartTVs, from TCL, that support a Kinect-like interface for (or instead of) the remote. The user can sit in front of the TV and, with a wave of the hands, scroll, select, push, pause - everything you can do with a remote. Users would need to get used to some new gestures.

Technologies to reduce “driver distraction”: Hyundai showcased a technology that uses camera/sensor-based obstacle detection to alert a distracted driver. In the demo, when you are “distracted” (by a call or SMS) and the sensors detect a vehicle in front, the car brakes by itself or tightens your seatbelt with an audible alert.

Tuesday, January 11, 2011

Form Factor Trends


How many screens do we need in our lives? SmartPhone, TV, Laptop, Tablet and ..?

 

In early January 2011, I had an opportunity to attend an awesome panel discussion at CES about “Gadgets Everywhere and the Role of Wireless”. The discussion brought forth some interesting predictions of the future that got me thinking. The panel seemed to agree that the tablet is the 4th and “final” screen in users’ lives. Well, that “final” screen sounds like a familiar phrase, as experts had indeed called the mobile (or smartphone) screen the final one a few years ago. That leaves us with a question: how many screens do users really need?

For the past few months, I have been using all 4 screens in my life. Let’s leave the TV screen aside for a moment, as an inevitable screen for every family to watch and enjoy content together. That leaves us with 3 “personal” screens. So do we need them all?

Recently, when I started using the tablet, I thought I would use the other 2 devices a bit “less”, paving the way to zeroing in on the best 2 of the 3 that fulfilled all my needs. Interestingly, however, I found a place for all 3 personal screens, and boy, did they each fit their place so well! I started using the tablet for a few applications where the laptop and smartphone were “less efficient” or simply “inadequate”. For example, the tablet was still as portable as the smartphone but made content-watching much better, with its bigger and better screen. Ready-to-consume content, viz. books, videos/movies and pictures, plays and looks much better on the tablet. And unlike the heavier laptop, it freed me from the “desk” and entered the other rooms of my home. You can hold it like a “bedside book” for reading. You can stand it up to play a movie. But the most amazing thing about the touch interface is that it almost eliminates the learning curve for non-computer-literates like my 3-year-old son or my parents. All they had to do to watch some content was simply touch what they wanted to play. That, for me, is cool. Now “intent” takes over from “methods”. This is how it always should have been: no need to read a user manual or need any hand-holding. Someone recently said at a conference: “The user manual is merely a list of design failures.” So true! If a consumer-facing product’s interface is intuitive, it should just unbox and be ready to serve.

So are tablets ready to take over from laptops or smartphones? Not so fast. Like many other people, I am not a tablet-typer, so it is tough to create any docs or content on a tablet. Nor is there a good MS Office or productivity app suite for my office work. (That may change once the Googles or Apples of the world solve it, or if MS Windows tablets take off again.) So yes, tablets are here to stay, but not ready to kick laptops out yet. There is a ton of opportunity to make the tablet more usable and enjoyable with software solutions (apps); that’s where the innovation will continue. With its portability, combined with HW components like GPS providing user context, the possibilities are endless.

There is a prediction that tablets will become the only “away from home” consumption device. I agree. The TV will remain unchallenged at home. Laptops will be around for a while at work and for work-from-home. I especially like the idea that the tablet is the device to enjoy while “leaning back” (content consumption) and the laptop is for “leaning forward” (content creation). To take this idea further, the TV is a device to enjoy with family while relaxing “feet up”, and the smartphone is a device to use on the go (“feet on”). Unless we stop getting into some of these positions, all these screens are here to stay. So yes, my guess is that 4 is the maximum; the only way from here is down to 3, or maybe 2, with some more evolution of HW/SW. What’s your take?