Unveiling the World of AR with ARCore: My Journey of Exploration [Practical Guide]

Shubham Kumar Gupta
Published in Stackademic · Jun 24, 2023

Hey there! I’ve recently embarked on an exciting adventure into the realm of augmented reality (AR) using the powerful ARCore SDK provided by Google. In this blog, I can’t wait to share my experiences, challenges, and triumphs as I delved into various AR capabilities like plane detection, model placing, depth analysis, face detection, surface video recording, camera switching, and more. Join me on this journey as I unveil the captivating world of AR and how it transformed my perspective on reality.

What’s AR?

AR stands for Augmented Reality. It refers to a technology that combines the “real world” with “virtual” elements, enhancing a user’s perception and interaction with the environment.
To build AR applications on Android, several libraries and frameworks are available, such as:

  • ARCore: It’s Google’s AR platform for Android and provides the core functionalities required for AR, such as motion tracking, environmental understanding, and light estimation.
  • OpenGL ES: Open Graphics Library for Embedded Systems; it’s a widely adopted graphics library for rendering 2D and 3D graphics on mobile devices.
  • Sceneform: It’s a high-level AR library built on top of ARCore, provided by Google. It simplifies the process of rendering 3D objects and scenes in AR by providing an intuitive API and tools.
  • Vuforia: It’s an AR platform that offers computer vision capabilities, object recognition, and tracking features.

A few more are available, such as Wikitude, Rajawali, etc.

What is ARCore and SceneForm?

ARCore is a software development kit (SDK) developed by Google that allows developers to build AR applications for Android devices. It provides tools for motion tracking, environmental understanding, and light estimation, which are essential for creating immersive AR experiences. SceneForm is a 3D framework that works with ARCore to make it easier for developers to create AR applications without having to learn complex 3D graphics programming.

Why ARCore?

In short, the reasons include:
- Core AR backed by Google!
- Device Compatibility
- Performance
- Integration with Android Ecosystem
- Community Support

So, Let’s start building the AR!

Let’s divide these into sections -

Section 1: Basic Initialization and Setup

  • Exploring the initial steps of setting up the ARCore SDK
  • Understanding the necessary configurations and dependencies
  • Overcoming common challenges in the initialization process

Section 2: Model Initialization

  • Unveiling the process of preparing and importing 3D models
  • Handling model loading, scaling, and positioning in the AR scene

Section 3: View Setup & Model Placements

  • Implementing the AR views for both front and back cameras
  • Developing the Back AR Fragment: Techniques and Considerations
  • Developing the Front AR Fragment: Challenges and Solutions
  • Solving issues related to camera switching

Section 4: Capturing Snaps or Recording the Screen!

  • Expanding the AR experience with snap capturing and screen recording

Section 1: Basic Initialization and Setup

— We can start building by creating a new empty Android Application

  • Let’s begin with AndroidManifest.xml
    - Permissions and features needed

<uses-permission android:name="android.permission.CAMERA" />

<!-- Limits app visibility in the Google Play Store to ARCore supported devices
     (https://developers.google.com/ar/devices). -->
<uses-feature
    android:name="android.hardware.camera.ar"
    android:required="false" /> <!-- required="false" keeps the app usable with or without ARCore -->
<uses-feature
    android:glEsVersion="0x00020000"
    android:required="false" /> <!-- required="false" keeps the app usable with or without ARCore -->
<uses-feature
    android:name="android.hardware.camera"
    android:required="false" />
<uses-feature
    android:name="android.hardware.camera.autofocus"
    android:required="false" />

    - Metadata needed

<meta-data
    android:name="com.google.ar.core"
    android:value="optional" /> <!-- "optional" lets the app run with or without ARCore installed -->
  • Next, jump into “build.gradle”

implementation 'com.google.ar:core:1.37.0'
implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.17.1'
implementation 'com.google.ar.sceneform:assets:1.17.1'
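
Since the manifest above marks ARCore as optional, it’s worth checking at runtime that ARCore is supported and installed before showing any AR screen. Here is a minimal sketch using the ArCoreApk API; the helper name and callback are my own, not from the original project:

import android.app.Activity
import android.os.Handler
import android.os.Looper
import com.google.ar.core.ArCoreApk

// Hypothetical helper: call this (e.g. from onResume) before showing any AR UI.
fun checkArCoreAndThen(activity: Activity, onReady: () -> Unit) {
    val availability = ArCoreApk.getInstance().checkAvailability(activity)
    if (availability.isTransient) {
        // The availability check hasn't finished yet; re-check shortly.
        Handler(Looper.getMainLooper()).postDelayed({ checkArCoreAndThen(activity, onReady) }, 200)
        return
    }
    if (availability.isSupported) {
        // Prompt the user to install/update "Google Play Services for AR" if it's missing.
        when (ArCoreApk.getInstance().requestInstall(activity, /* userRequestedInstall = */ true)) {
            ArCoreApk.InstallStatus.INSTALLED -> onReady()
            ArCoreApk.InstallStatus.INSTALL_REQUESTED -> Unit // the install flow resumes the activity later
        }
    }
    // If the device is unsupported, simply skip the AR features (the app stays usable).
}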

Section 2: Model Initialization

The challenge I faced:

Choosing a model format was tricky: many 3D models simply wouldn’t render, and I kept hitting issues like “texture not found”. Eventually I found that the best-supported formats for ARCore on Android are SFA, SFB, GLB, and glTF.

Let’s start now!

Let's download our “model.glb” file from “sketchfab.com” or any website you prefer. We can place the model inside the assets folder, or we can download and store the file in any directory.
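
If the model lives in the assets folder but you want an actual File on disk (like the target file referenced in the next snippet), one option is to copy it into the cache directory first. A rough sketch, assuming the asset is named “model.glb”:

import android.content.Context
import java.io.File
import java.io.FileOutputStream

// Hypothetical helper: copies "model.glb" out of the assets folder into the cache
// directory so we get a plain File (the `target` used in the snippet below).
private fun copyModelFromAssets(context: Context, assetName: String = "model.glb"): File {
    val target = File(context.cacheDir, assetName)
    context.assets.open(assetName).use { input ->
        FileOutputStream(target).use { output ->
            input.copyTo(output)
        }
    }
    return target
}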

The code segment below constructs a ModelRenderable by specifying the source, scale, and recentering mode of the 3D model. The ModelRenderable can then be used to render the model in an AR scene, creating immersive and interactive augmented reality experiences.

val model = ModelRenderable.builder()
    .setSource(
        requireContext(),
        RenderableSource.builder()
            .setSource(
                requireContext(),
                Uri.parse(target.absolutePath),
                // Uri.parse("model.glb") if model.glb is stored inside the assets folder
                RenderableSource.SourceType.GLB
            )
            .setScale(1.5f)
            .setRecenterMode(RenderableSource.RecenterMode.ROOT)
            .build()
    )
    .build()

Section 3: View Setup

  • i) Developing Back AR Fragment

The challenge I faced:

One of the challenges I encountered was properly anchoring the model and effectively handling scaling, rotation, plane detection, and accurate placement of the model in the augmented reality scene. These took a fair amount of experimentation to get right.

The code below is a custom implementation of the “ArFragment” class called BackArFragment. We override the getSessionConfiguration function so we can tune the session to our needs, such as auto-focusing and how the model should be placed. This customization allows developers to tailor the AR experience to their specific requirements.

class BackArFragment : ArFragment() {

    override fun getSessionConfiguration(session: Session): Config {
        val config = Config(session)
        config.updateMode = Config.UpdateMode.LATEST_CAMERA_IMAGE
        config.focusMode = Config.FocusMode.AUTO
        config.depthMode = Config.DepthMode.DISABLED
        config.instantPlacementMode = Config.InstantPlacementMode.DISABLED

        session.configure(config)
        arSceneView.setupSession(session)
        return config
    }

    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?,
    ): View? {
        val frameLayout =
            super.onCreateView(inflater, container, savedInstanceState) as FrameLayout?
        planeDiscoveryController.hide()
        planeDiscoveryController.setInstructionView(null)
        return frameLayout
    }
}

Let’s build our main_ar_fragment.xml file

<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <fragment
        android:id="@+id/arSceneformBack"
        android:name="com.my.application.BackArFragment"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>

Now, this is how our “MainARFragment.kt” would look.

The first step is to initialize our “BackArFragment” object, which sets up the camera with the desired options defined. This ensures that the camera operates according to our specifications. Next, we move on to loading the assets into the AR scene.

There are various types of nodes available for loading assets, such as Node, TransformableNode, SkeletonNode, AugmentedFaceNode, and more. In our case, we opt to load the model as a TransformableNode to take advantage of its built-in controls for rotation and zooming. This choice allows us to have greater flexibility and interactivity with the loaded model within the AR environment.

val backArFragment =
    childFragmentManager.findFragmentById(R.id.arSceneformBack) as BackArFragment

// Back
backArFragment.setOnTapArPlaneListener { hitResult: HitResult, plane: Plane, motionEvent: MotionEvent ->
    loadViaAssets(backArFragment, hitResult)
}


private fun loadViaAssets(
    arFragment: BackArFragment,
    hitResult: HitResult,
) {
    val anchor = hitResult.createAnchor()
    model?.thenAccept { modelRenderable ->
        addModelToScene(arFragment, anchor, modelRenderable)
    }
        ?.exceptionally {
            val builder = AlertDialog.Builder(requireContext())
            builder.setMessage(it.message).show()
            return@exceptionally null
        }
}

private fun addModelToScene(
    arFragment: BackArFragment,
    anchor: Anchor?,
    modelRenderable: ModelRenderable?,
) {
    val anchorNode = anchor?.let { AnchorNode(it) }
    val transformableNode =
        TransformableNode(arFragment.transformationSystem)

    transformableNode.scaleController?.maxScale = 1.5f
    transformableNode.scaleController?.minScale = 0.2f
    transformableNode.localPosition = Vector3(0f, 0f, -2f)
    transformableNode.renderable = modelRenderable
    transformableNode.setParent(anchorNode)
    transformableNode.rotationController?.isEnabled = true
    arFragment.arSceneView?.scene?.addChild(anchorNode)
    transformableNode.select()
}
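
Not covered in the original flow, but useful in practice: if you need to clear a placed model (for example before placing a new one), a minimal sketch is to detach the ARCore anchor and remove the node from the Sceneform scene:

// Optional cleanup: detach the anchor and remove the node so the model disappears.
private fun removeModel(arFragment: BackArFragment, anchorNode: AnchorNode) {
    anchorNode.anchor?.detach()                              // stop ARCore tracking for this anchor
    arFragment.arSceneView?.scene?.removeChild(anchorNode)   // remove it from the scene graph
}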
  • ii) Developing Front AR Fragment

The challenge I faced:

One of the challenges I encountered during my development journey was the difficulty of placing the model accurately over specific areas of the user’s body, such as the neck or top of the head. While ARCore provides powerful tools for tracking and placing objects in the real world, achieving precise alignment with specific body parts can be challenging.

In the code below, we override the getSessionFeatures method, which we use to switch the camera to the front, and the getSessionConfiguration method, where we enable details like face meshing (the augmented face mesh mode).

class FaceArFrontFragment : ArFragment() {

    override fun getSessionConfiguration(session: Session): Config {
        val config = Config(session)
        config.updateMode = Config.UpdateMode.LATEST_CAMERA_IMAGE
        config.focusMode = Config.FocusMode.AUTO
        config.depthMode = Config.DepthMode.DISABLED
        config.instantPlacementMode = Config.InstantPlacementMode.DISABLED
        config.augmentedFaceMode = Config.AugmentedFaceMode.MESH3D

        session.configure(config)
        arSceneView.setupSession(session)
        return config
    }

    // Use the below to switch the camera
    override fun getSessionFeatures(): Set<Session.Feature> {
        return EnumSet.of(Session.Feature.FRONT_CAMERA)
    }

    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?,
    ): View? {
        val frameLayout =
            super.onCreateView(inflater, container, savedInstanceState) as FrameLayout?
        planeDiscoveryController.hide()
        planeDiscoveryController.setInstructionView(null)
        return frameLayout
    }
}

Our main_ar_fragment.xml would look like this:

<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <fragment
        android:id="@+id/arSceneformFront"
        android:name="com.my.application.FaceArFrontFragment"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>

We can initialize the FaceArFrontFragment as in the code below: first we initialize the fragment, then we create our model, and finally we create a texture that we can place over the face mesh.

val faceArFrontFragment =
    childFragmentManager.findFragmentById(R.id.arSceneformFront) as FaceArFrontFragment

faceArFrontFragment.arSceneView?.cameraStreamRenderPriority =
    Renderable.RENDER_PRIORITY_FIRST

val faceNodeMap = HashMap<AugmentedFace, AugmentedFaceNode>()

val faceModel = ModelRenderable.builder()
    .setSource(
        requireContext(),
        RenderableSource.builder()
            .setSource(
                requireContext(),
                Uri.parse(target.absolutePath), // Uri.parse("model.glb") if model.glb is stored inside the assets folder
                RenderableSource.SourceType.GLB
            )
            .setScale(0.5f)
            .setRecenterMode(RenderableSource.RecenterMode.ROOT)
            .build()
    )
    .build()

faceModel?.thenAccept { modelRenderable ->
    modelRenderable.isShadowCaster = false
    modelRenderable.isShadowReceiver = false
}

var faceMeshTexture: Texture? = null

Texture.builder()
    .setSource(requireContext(), R.drawable.makeup)
    .build()
    .thenAccept { texture -> faceMeshTexture = texture }
  • Now, let’s search for a face and place our model there!
// FACETIME
val scene = faceArFrontFragment.arSceneView?.scene
val session = faceArFrontFragment.arSceneView?.session

scene?.addOnUpdateListener {
    if (faceModel != null) {

        val allDetectedFaces = session
            ?.getAllTrackables(AugmentedFace::class.java)

        allDetectedFaces?.let {
            for (face in it) {
                // Found a new face
                if (!faceNodeMap.containsKey(face)) {
                    val faceNode = AugmentedFaceNode(face)
                    faceNode.setParent(scene)

                    // We can place our 3d model at any position we need

                    // faceModel?.thenAccept {
                    //     val badge = Node()
                    //     val localPosition = Vector3()
                    //     localPosition.set(0.0f, -0.3f, 0.0f)
                    //     badge.localScale = Vector3(0.1f, 0.1f, 0.1f)
                    //     badge.localPosition = localPosition
                    //     badge.setParent(faceNode)
                    //     badge.renderable = it
                    // }

                    // We can place any activity_layout.xml at any position we need using ViewRenderable

                    ViewRenderable.builder().setView(requireContext(), R.layout.ar_layout).build()
                        .thenAccept {
                            val obj = Node()
                            val localPosition = Vector3()
                            localPosition.set(0.0f, -0.4f, 0.0f)
                            obj.localPosition = localPosition
                            obj.setParent(faceNode)
                            obj.renderable = it
                        }
                    faceNode.faceMeshTexture = faceMeshTexture
                    faceNodeMap[face] = faceNode
                }
            }

            // Remove any AugmentedFaceNodes associated with an AugmentedFace that stopped tracking.
            val iter = faceNodeMap.entries.iterator()
            while (iter.hasNext()) {
                val entry = iter.next()
                val face = entry.key
                if (face.trackingState == TrackingState.STOPPED) {
                    val faceNode = entry.value
                    faceNode.setParent(null)
                    iter.remove()
                }
            }
        }
    }
}
  • iii) Switching Cameras!!

The challenge I faced:

One of the challenges I encountered during my development journey was the difficulty of switching the camera. Most of the time, the error was due to using both cameras at once, or to issues related to sessions.

While working with AR, I was able to develop the front and back fragments separately, but I had difficulty switching between the cameras and working with both in the same screen.

So, I thought about the factory design pattern combined with fragment transactions: we create a factory method that, based on a parameter, returns the right fragment!

So let’s dive into that

class CustomARCameraFragment : ArFragment() {

    override fun getSessionConfiguration(session: Session): Config {
        val config = Config(session)
        if (arguments?.getBoolean(IS_FRONT) == true) {
            config.updateMode = Config.UpdateMode.LATEST_CAMERA_IMAGE
            config.focusMode = Config.FocusMode.AUTO
            config.depthMode = Config.DepthMode.DISABLED
            config.instantPlacementMode = Config.InstantPlacementMode.DISABLED
            config.augmentedFaceMode = Config.AugmentedFaceMode.MESH3D
        } else {
            config.updateMode = Config.UpdateMode.LATEST_CAMERA_IMAGE
            config.focusMode = Config.FocusMode.AUTO
            config.depthMode = Config.DepthMode.DISABLED
            config.instantPlacementMode = Config.InstantPlacementMode.DISABLED
        }
        session.configure(config)
        arSceneView.setupSession(session)
        return config
    }

    override fun getSessionFeatures(): Set<Session.Feature> {
        return if (arguments?.getBoolean(IS_FRONT) == true) {
            EnumSet.of(Session.Feature.FRONT_CAMERA)
        } else {
            EnumSet.of(Session.Feature.SHARED_CAMERA)
        }
    }

    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?,
    ): View? {
        // Inflate the layout for this fragment
        val frameLayout =
            super.onCreateView(inflater, container, savedInstanceState) as FrameLayout?
        planeDiscoveryController.hide()
        planeDiscoveryController.setInstructionView(null)
        return frameLayout
    }

    companion object {
        private const val IS_FRONT = "is_front"

        @JvmStatic
        fun newInstance(isFront: Boolean) =
            CustomARCameraFragment().apply {
                arguments = Bundle().apply {
                    putBoolean(IS_FRONT, isFront)
                }
            }
    }
}

Here’s how the “main_ar_fragment.xml” would look:

<androidx.fragment.app.FragmentContainerView
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/fragmentContainer"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

Here’s how our “MainARFragment.kt” would look.
The initCameraARFragment function initializes the AR camera fragment based on the provided parameter indicating whether the front or back camera should be used. The switchCamera function toggles the camera between the front and back by updating the isFront flag and reinitializing the AR camera fragment accordingly. This functionality allows for seamless switching between the front and back cameras during the AR experience.

private fun initCameraARFragment(isFront: Boolean) {
    lifecycleScope.launch(Dispatchers.Main) {
        cameraSwitchArFragment = CustomARCameraFragment.newInstance(isFront)
        cameraSwitchArFragment?.let { fragment ->
            childFragmentManager.beginTransaction()
                .replace(R.id.fragmentContainer, fragment)
                .commit()
        }
        cameraSwitchArFragment?.arSceneView?.cameraStreamRenderPriority =
            Renderable.RENDER_PRIORITY_FIRST
    }
}

private fun switchCamera() {
    isFront = !isFront
    initCameraARFragment(isFront)
}
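
To wire this up, the fragment just needs an initial call plus a control that triggers switchCamera(). A small sketch, where switchCameraButton is a hypothetical view id not shown in the layouts above:

override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
    super.onViewCreated(view, savedInstanceState)
    // Start with the back camera by default, then flip on demand.
    initCameraARFragment(isFront = false)
    // switchCameraButton is an assumed control; add your own to the layout.
    view.findViewById<ImageButton>(R.id.switchCameraButton)?.setOnClickListener {
        switchCamera()
    }
}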

Section 4: Capturing Snaps or Recording the Screen!

The challenge I faced:

One of the challenges I encountered was that I couldn’t capture the surface view properly: the capture included the controls as well as the AR view, and even when the screen was captured, the colors were completely off, with a heavy blue or green tint. The same went for recording the screen, which also recorded the controls.

Hey, let’s dive into capturing/recording the surface view.
The CaptureHelper class encapsulates functions for capturing photos and recording videos in an AR scene. It provides functionality to take a photo of the scene, save it as an image, and toggle the recording state to start or stop video recording. It also handles the configuration and setup of the media recorder, video properties, and surface mirroring for video recording.

Let's divide this class into three segments:
- CaptureScreen as takePhoto
- Save the Bitmap
- Record the SurfaceView and store it using MediaRecorder

  1. CaptureScreen as takePhoto function:
  • This function captures a photo of the AR scene by creating a bitmap with the size of the provided arSceneView.
  • It then uses PixelCopy.request to copy the pixel data from the arSceneView to the bitmap.
  • If the pixel copy is successful, the resulting bitmap is passed to the saveMediaToStorage function to save it to the provided file path.
class CaptureHelper {

    val TAG = "CaptureHelper"

    fun takePhoto(
        arSceneView: SurfaceView,
        filePath: File,
    ) {
        // Create a bitmap the size of the scene view.
        val bitmap = Bitmap.createBitmap(
            arSceneView.width, arSceneView.height,
            Bitmap.Config.ARGB_8888
        )

        // Create a handler thread to offload the processing of the image.
        val handlerThread = HandlerThread("PixelCopier")
        handlerThread.start()

        // Make the request to copy.
        PixelCopy.request(arSceneView, bitmap, { copyResult ->
            if (copyResult == PixelCopy.SUCCESS) {
                saveMediaToStorage(bitmap, filePath)
            }
            handlerThread.quitSafely()
        }, Handler(handlerThread.looper))
    }
}

2. Save the Bitmap:

  • This function saves the provided bitmap as an image to the specified file directory.
  • It creates an output stream to the file and uses it to compress and write the bitmap data as a JPEG image.
class CaptureHelper {

    // This method saves the bitmap as a JPEG to the given file.
    fun saveMediaToStorage(
        bitmap: Bitmap,
        fileDir: File,
    ) {
        // Output stream
        val fos: OutputStream = FileOutputStream(fileDir)

        fos.use {
            // Finally, write the bitmap to the output stream that we opened.
            bitmap.compress(Bitmap.CompressFormat.JPEG, 100, it)
        }
    }
}

3. Record the SurfaceView and store it using MediaRecorder

  • Functions like buildFilename, stopRecordingVideo, setUpMediaRecorder, setVideoSize, setVideoQuality, setVideoCodec, onToggleRecord, startRecordingVideo, setBitRate, setFrameRate, setSceneView, isRecording, and getVideoPath are provided to support video recording functionality.
  • These functions handle the setup, configuration, and management of the media recorder, video properties, recording state, and surface mirroring.
class CaptureHelper {

    private val DEFAULT_BITRATE = 10000000
    private val DEFAULT_FRAMERATE = 30
    private var recordingVideoFlag: Boolean? = false
    private var mediaRecorder: MediaRecorder? = null
    private var videoSize: Size? = null
    private var sceneView: SceneView? = null
    private var videoCodec: Int? = null
    private var videoPath: File? = null
    private var encoderSurface: Surface? = null
    private var bitRate = DEFAULT_BITRATE
    private var frameRate = DEFAULT_FRAMERATE

    private val FALLBACK_QUALITY_LEVELS = arrayOf(
        CamcorderProfile.QUALITY_HIGH,
        CamcorderProfile.QUALITY_2160P,
        CamcorderProfile.QUALITY_1080P,
        CamcorderProfile.QUALITY_720P,
        CamcorderProfile.QUALITY_480P
    )

    private fun buildFilename(fileDir: File): String? {
        videoPath = fileDir

        val dir = videoPath?.parentFile
        if (dir?.exists() != true) {
            dir?.mkdirs()
        }
        return videoPath?.absolutePath
    }

    private fun stopRecordingVideo() {
        // UI
        recordingVideoFlag = false
        if (encoderSurface != null) {
            sceneView?.stopMirroringToSurface(encoderSurface)
            encoderSurface = null
        }
        // Stop recording
        mediaRecorder?.stop()
        mediaRecorder?.reset()
    }

    @Throws(IOException::class)
    private fun setUpMediaRecorder() {
        mediaRecorder?.setVideoSource(MediaRecorder.VideoSource.SURFACE)
        mediaRecorder?.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
        mediaRecorder?.setOutputFile(videoPath?.absolutePath)
        mediaRecorder?.setVideoEncodingBitRate(bitRate)
        mediaRecorder?.setVideoFrameRate(frameRate)
        mediaRecorder?.setVideoSize(videoSize?.width!!, videoSize?.height!!)
        videoCodec?.let {
            mediaRecorder?.setVideoEncoder(it)
        }
        mediaRecorder?.prepare()
        try {
            mediaRecorder?.start()
        } catch (e: IllegalStateException) {
            Log.e(TAG, "Exception starting capture: " + e.message, e)
        }
    }

    fun setVideoSize(width: Int, height: Int) {
        videoSize = Size(width, height)
    }

    fun setVideoQuality(quality: Int, orientation: Int) {
        var profile: CamcorderProfile? = null
        if (CamcorderProfile.hasProfile(quality)) {
            profile = CamcorderProfile.get(quality)
        }
        if (profile == null) {
            // Select a quality that is available on this device.
            for (level in FALLBACK_QUALITY_LEVELS) {
                if (CamcorderProfile.hasProfile(level)) {
                    profile = CamcorderProfile.get(level)
                    break
                }
            }
        }
        profile?.let {
            if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
                setVideoSize(profile.videoFrameWidth, profile.videoFrameHeight)
            } else {
                setVideoSize(profile.videoFrameHeight, profile.videoFrameWidth)
            }
            setVideoCodec(profile.videoCodec)
            setBitRate(profile.videoBitRate)
            setFrameRate(profile.videoFrameRate)
        }
    }

    fun setVideoCodec(videoCodec: Int) {
        this.videoCodec = videoCodec
    }

    /**
     * Toggles the state of video recording.
     *
     * @return true if recording is now active.
     */
    fun onToggleRecord(fileDir: File): Boolean? {
        if (recordingVideoFlag == true) {
            stopRecordingVideo()
        } else {
            startRecordingVideo(fileDir)
        }
        return recordingVideoFlag
    }

    private fun startRecordingVideo(fileDir: File) {
        if (mediaRecorder == null) {
            mediaRecorder = MediaRecorder()
        }
        try {
            buildFilename(fileDir)
            setUpMediaRecorder()
        } catch (e: IOException) {
            Log.e(TAG, "Exception setting up recorder $e")
        }

        // Set up the Surface for the MediaRecorder.
        encoderSurface = mediaRecorder?.surface
        sceneView?.startMirroringToSurface(
            encoderSurface, 0, 0, videoSize?.width!!, videoSize?.height!!
        )

        recordingVideoFlag = true
    }

    fun setBitRate(bitRate: Int) {
        this.bitRate = bitRate
    }

    fun setFrameRate(frameRate: Int) {
        this.frameRate = frameRate
    }

    fun setSceneView(sceneView: SceneView) {
        this.sceneView = sceneView
    }

    fun isRecording(): Boolean? {
        return recordingVideoFlag
    }

    fun getVideoPath(): File? {
        return videoPath
    }

}

Here’s how we utilized our CaptureHelper to build this!

val captureHelper = CaptureHelper()

// To capture the surface view (sceneView here is the fragment's ArSceneView)
val fileName = "file.jpg"
val imagesDir =
    Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES)
val filePath = File(imagesDir, fileName)

captureHelper.takePhoto(
    sceneView,
    filePath
)


// To record the video
cameraSwitchArFragment?.arSceneView?.let { sceneView ->
    captureHelper.setSceneView(sceneView)
}
val videoName = "capture.mp4"
val videosDir =
    Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_MOVIES)
val videoFilePath = File(videosDir, videoName)

val orientation = resources.configuration.orientation
captureHelper.setVideoQuality(CamcorderProfile.QUALITY_720P, orientation)

// Start recording
val isRecording = captureHelper.onToggleRecord(videoFilePath)

// Calling onToggleRecord again will stop the recording; see the sketch below.
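
For completeness, stopping looks like the same toggle call: the helper flips its flag back, stops the MediaRecorder, and you can then read the output file. Roughly:

// Stop recording: the same toggle finalizes the file written by MediaRecorder.
if (captureHelper.isRecording() == true) {
    captureHelper.onToggleRecord(videoFilePath)
    val recordedFile = captureHelper.getVideoPath() // the File we passed in above
}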

Challenges and Solutions
Building an AR-based Android application can be challenging, especially if you are new to AR development. Some of the common challenges that I faced while building my application included issues with motion tracking, lighting, and model placement. To overcome these challenges, I had to experiment with different settings and configurations, as well as consult the ARCore and SceneForm documentation and online forums for help.

Conclusion
In conclusion, building an AR-based Android application using ARCore and SceneForm can be a rewarding experience, but it requires a lot of hard work and dedication. By following the steps outlined in this article and experimenting with different features and settings, you can create an immersive AR experience that will captivate and engage your users. So, what are you waiting for? Start building your AR-based Android application today and unlock the full potential of AR technology!

I will also soon be publishing it on GeeksForGeeks!
