There Are 3x More Interfaces You Can Build Than You Think / OSS Summit 2025 Report
Hello, I'm katayama8000, a software engineer at Nulab Inc.
Open Source Summit Japan 2025 was held in Tokyo from December 8–12, 2025, and I attended!
The reason I participated wasn't that I had a specific talk I wanted to hear, but rather because I was feeling a bit hazy lately, wondering if I could continue as a software engineer given the rise of AI agents. I'm sure there are more than a few people who feel the same way. I joined this event hoping to find some clarity.
It was only after deciding to attend that I realized the registration fee was quite high: 375.00 USD. I'm grateful to my company for covering the cost and to my team for keeping development going while I was away.
The Interfaces You Can Provide Are 3x More Than You Think
To all of you in software development: what kind of interfaces have you provided so far?
For most of you, it's probably web applications or mobile applications, right? Me too.
However, the interfaces we can provide with the skill sets we already have are more numerous than we might think. Based on what I heard at the OSS Summit and other sources, let me introduce a few.
Car Navigation Systems (Advancement of Software Defined Vehicles)
At the OSS Summit, sponsor companies had booths showing various demos. While some booths were staffed only by people from overseas, the car manufacturer booths were mostly Japanese. As mentioned in the lectures, there's a term in the hardware industry called SDV (Software Defined Vehicle), meaning vehicle functions are increasingly being defined by software.
At the booth where I spoke, they had taken a car navigation system that was originally tightly coupled with its hardware, modularized it appropriately, and made the applications inside it run on Android. In other words, car navigation apps can now be developed using only technologies we software engineers are already familiar with. They had also built an emulator for use during development, so there's no need to build and connect to the hardware every single time just to check behavior.
When I was previously involved in hardware-related development, there were no emulators or debuggers; I'd change a bit of code and then wait about 10 minutes for the build. Since I couldn't see the data directly, I had to identify problems by capturing packets with Wireshark and looking at logs. Sometimes I'd spend all day just looking at logs, so having an environment like this is incredibly helpful for developers. Since reasons for things not working often turned out to be poor contact between hardware components, being able to complete development entirely within a PC is a huge lifesaver.
Furthermore, there seems to be a mechanism similar to the iOS App Store within car navigation systems, which will reportedly allow for easy publishing and installation of apps. However, since you can't really operate apps while driving, it might still be a while before car-mounted apps become truly popular. As autonomous driving technology advances and we can use our time more freely inside the car, the demand for in-car apps may increase.
As a side note, I was talking with the person at that booth about what kind of apps we'd want to use while in a car.
Ideas like a car version of "Pokémon GO," a growth game that evolves based on mileage, and apps linked with car insurance came up.
Thinking about what's unique to cars, apps using speed, mileage, and location information seem plausible. While some apps might cause a backlash if developed, I felt that things that were previously impossible due to man-hours, budget, or technical constraints are becoming possible through technological advancements (like SDV and AI agents).
meta-flutter: Penetration into Embedded Systems
Flutter is famous as a framework for developing mobile applications, but there is actually a project to make Flutter usable for embedded systems as well. That is meta-flutter.
In the talk, there was a demo of a simple calculator app running on Linux. In the AGL (Automotive Grade Linux) environment, it seems that vehicle APIs can also be called from Flutter. So, for example, it's possible to get the status of the left door from a Flutter app and reflect it in the UI.
The tooling is also quite mature, supporting development on Mac, Windows, and Linux, with IDE (VS Code) integration, emulators, and CI/CD support.
What left an impression on me in this talk was how much interest Japanese car manufacturers showed in the technology. My impression had been that the automotive industry faces a higher hurdle for adopting new technologies than the web industry does. Speaking from my own experience in the automotive industry: we used Tailwind CSS up through the demo phase, but once actual development started we were told it couldn't be used. Installing it via npm or loading it from a CDN was prohibited, though placing pre-built CSS files in the project was allowed, so that's how we handled it. That experience left me with the impression that these companies are very cautious about introducing packages.
According to a survey by the Linux Foundation, on the item "A lack of clear policy or supporting training and guidance on how to use OSS" under "Barriers limiting OSS adoption in organizations," Japan stands at 51%, while countries other than Japan are at 28%. From these numbers, it seems fair to infer that some companies simply hold off on using OSS because they don't know the criteria for deciding whether it's acceptable.
Since OSS is indispensable for current development, I believe that becoming more positive toward the use of (or contribution to) OSS will also increase development efficiency. I hope that the attitude of actively trying out new technologies like meta-flutter will spread among Japanese car manufacturers as well.
Fire TV / Amazon Echo: React Native Runs at the OS Level
This isn't directly related to the OSS Summit, but I learned about it at a meetup I attended while in Tokyo, where an active Amazon engineer gave a presentation.
Amazon is developing something called Vega OS, which supports React Native at the OS level. In other words, it's "React Native Native." Packages commonly used in React Native are built into the OS side, meaning the application layer can be kept very thin.
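To make the "thin application layer" idea concrete, here is a purely illustrative sketch. Everything below is an assumption, not Vega's actual API: I model a React-style element tree as plain objects so the sketch runs without React installed. The idea is that the app ships only a declarative function from state to UI, while component implementations like "View" and "Text" would resolve to code already living in the OS.

```typescript
// Illustrative sketch of a thin, declarative app layer. On a platform
// where the renderer and core components are built into the OS, the
// app itself can be little more than a function from state to a tree.

interface Element {
  type: string;
  props: Record<string, unknown>;
  children: (Element | string)[];
}

// Tiny element constructor standing in for JSX.
function h(
  type: string,
  props: Record<string, unknown>,
  ...children: (Element | string)[]
): Element {
  return { type, props, children };
}

// The whole "app": state in, UI tree out. "View" and "Text" are
// placeholders for components the OS would provide.
function App(state: { nowPlaying: string }): Element {
  return h("View", { style: { padding: 16 } },
    h("Text", { size: "large" }, "Now playing:"),
    h("Text", { size: "small" }, state.nowPlaying),
  );
}

const tree = App({ nowPlaying: "Lo-fi beats" });
console.log(tree.children.length); // 2
```

The bundle for an app like this contains almost no framework code of its own, which is presumably what makes the application layer so light.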
As with the other platforms mentioned above, emulators and VS Code plugins are provided, so it looks like simple apps can be developed quickly.
I never imagined that I could develop interfaces for TVs or smart speakers using only the skills I already possess.
Moreover, this Vega OS is reportedly faster than the previous Android-based OS. I wondered how that could be: while Android isn't exclusively a smartphone OS, it apparently carries features designed specifically for smartphones, and shedding that overhead is what made the difference. And since Google develops Android, Amazon may also have wanted to eliminate that dependency.
As a side note, regarding React Native, applications within Meta Quest (the VR headset provided by Meta) can now be developed using React Native and its framework, Expo.
Summary
I've looked back at the OSS Summit from a technical perspective. Over the three days, I listened to many talks and got to experience technologies I don't usually encounter in my daily work. What remains most memorable, alongside the talks themselves, are the conversations with other participants. Translation tools are available, but the entire event is conducted in English, which is admittedly exhausting. Whenever I needed a breather, I would strike up conversations with people at the booths or with other attendees.
I had lively discussions about Rust with fellow Rust enthusiasts, talked with colleagues about other teams and things we don't usually get to discuss, discovered that a former colleague of someone at a booth now works at my company, and heard from someone at an automotive company about what makes Tesla so impressive... I can't write them all down, but I had so many different conversations. I brought a business card holder packed to the brim, and by the third day the cards were all gone.
The "hazy feeling" I mentioned in my motivation for participating—wondering if I should continue as a software engineer—has cleared up a bit. Partly because my motivation increased by talking with people who have a high interest in OSS and technology, and partly because experiencing technologies I don't touch in my daily work expanded the breadth of my interests. I felt that environments are now being created where it's easier to challenge things that might have seemed too difficult without AI, and it’s really about those who take action coming out on top.
Since participating, I've been enjoying myself by buying books on Linux and purchasing hardware to try connecting it with software.
I believe an engineer's job isn't just about writing code; it's about solving the world's problems. I think it's also very important to provide the most suitable method for problem-solving. Engineers who are feeling "hazy" like I was might find some new insights by participating in the next OSS Summit.