Part 4: Doing AR with QR codes in a React + TypeScript web app (drawing detection guide lines)
<- [Part 3] Doing AR with QR codes in a React + TypeScript web app (camera)
[Part 5] Doing AR with QR codes in a React + TypeScript web app (finale) ->
Abstract
Part 4 of implementing AR in a React + TypeScript web app.
The goal of the series is to detect a QR code in the camera feed and display a 3D model on top of it, tracking the code as it moves.
In this part, we draw debug guide lines over the detected QR code's corner points.
Conclusion
The result of this part is here↓
Prerequisites
- A React + TypeScript development environment is already set up: [Setup] Building a VSCode + React + TypeScript development environment on Windows
- Start from the previous project: ReactTs-QrAr003
- Have a webcam ready.
Steps
1. Create the project -> open it in VSCode
Start from this template code: ReactTs-QrAr003
git clone, rename the folder, and so on:
$ D:
$ cd .\Products\React.js\ # substitute your own working folder here.
$ rd /q /s D:\Products\React.js\ReactTs-QrAr003
$ git clone https://github.com/aaaa1597/ReactTs-QrAr003.git
$ rd /q /s "ReactTs-QrAr003/.git"
$ ren ReactTs-QrAr003 ReactTs-QrAr004
$ cd ReactTs-QrAr004
Setup
Command prompt
$ npm install
Run it
Command prompt
$ npm start
At this point, you can confirm the app runs as before.
Implementation
Add a canvas for debugging
Add a canvas on which to draw the detected rectangle for debugging.
App.tsx
+190:       {/* debug canvas */}
+191: <canvas id="canvas" width="1920" height="1080" style={{ position: "absolute", left: "0px", top: "0px", background: "#0088ff44"}}></canvas>
Set up canvas drawing for debugging
Settings for drawing the detected rectangle for debugging.
On detection, call setPoints3/setPoints4.
A detection can return either three or four corner points, so the two cases are handled separately.
App.tsx
+66:   {/* debug canvas */}
+67: const [context, setContext] = useState<CanvasRenderingContext2D>();
+68: const [points4, setPoints4] = useState<ResultPoint[]>();
+69: const [points3, setPoints3] = useState<ResultPoint[]>();
Remove the error handlers
We don't need them.
App.tsx
- onError(ret) {
- console.log('onError::ret=', ret);
- },
- onDecodeError(ret) {
- console.log('onDecodeError::ret=', ret);
- },
Detection handling
When a code is detected, save the rectangle's corner points.
App.tsx
+88: if(points.length%4 == 0)
+89: setPoints4(points);
+90: else if(points.length%3 == 0)
+91: setPoints3(points);
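The modulo checks above route whatever point count ZXing hands back: a QR result normally carries three finder-pattern points, or four when the alignment pattern is also located. A minimal sketch of that branching with plain numbers (`classify` and its labels are illustrative names, not part of the project):

```typescript
// Hypothetical helper mirroring the branching above: ZXing's
// getResultPoints() returns 3 points (finder patterns only) or
// 4 points (finder patterns + alignment pattern) per QR code.
type Bucket = "quad" | "tri" | "none";

const classify = (pointCount: number): Bucket => {
  if (pointCount <= 0) return "none";       // guarded earlier by `length <= 0`
  if (pointCount % 4 === 0) return "quad";  // 4 points -> setPoints4
  if (pointCount % 3 === 0) return "tri";   // 3 points -> setPoints3
  return "none";
};
```

Note that the modulo (rather than a strict `=== 4` / `=== 3`) also routes multiples of four, e.g. eight points, into the 4-point branch.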
Keep the canvas context
To draw anything on a canvas, you first need its context, so grab it and keep it in state.
App.tsx
103: const canvas = document.getElementById("canvas") as HTMLCanvasElement
104: canvas.width = ref.current.videoWidth;
105:    canvas.height = ref.current.videoHeight;
106: const context = canvas.getContext("2d")
107: if(!context) return;
108: setContext(context);
Draw the frame when a QR code is detected
When a QR code is detected, the rectangle's corner points become available; draw the frame at that point.
App.tsx
+113:  {/* debug: check when 3 points are returned */}
+114: useEffect(() => {
+115: if(!context) return;
+116: if(!points3) return;
+117:    /* find the midpoint of the diagonal */
+118: const xpwr = Math.abs(points3[2].getX() - points3[0].getX());
+119: const xm = Math.min(points3[2].getX() , points3[0].getX()) + xpwr/2;
+120: const ypwr = Math.abs(points3[2].getY() - points3[0].getY());
+121: const ym = Math.min(points3[2].getY() , points3[0].getY()) + ypwr/2;
+122: const m = new ResultPoint( xm, ym);
+123: context.clearRect(0, 0, 1920, 1080)
+124:    /* draw the midpoint */
+125: context.beginPath();
+126: context.arc(m.getX(), m.getY(), 10, 0, 2*Math.PI, false);
+127: context.fillStyle = 'green';
+128: context.fill();
+129: context.stroke()
+130:    /* outline */
+131: context.beginPath()
+132: context.moveTo( points3[0].getX(), points3[0].getY())
+133: context.lineTo( points3[1].getX(), points3[1].getY())
+134: context.lineTo( points3[2].getX(), points3[2].getY())
+135:    /* lines to the midpoint */
+136: context.moveTo( points3[0].getX(), points3[0].getY())
+137: context.lineTo( m.getX(), m.getY())
+138: context.moveTo( points3[1].getX(), points3[1].getY())
+139: context.lineTo( m.getX(), m.getY())
+140: context.moveTo( points3[2].getX(), points3[2].getY())
+141: context.lineTo( m.getX(), m.getY())
+142:    /* stroke */
+143: context.strokeStyle = "red";
+144: context.lineWidth = 2;
+145: context.stroke()
+146: context.font = "48px serif";
+147: context.fillText("0",points3[0].getX(), points3[0].getY())
+148: context.fillText("1",points3[1].getX(), points3[1].getY())
+149: context.fillText("2",points3[2].getX(), points3[2].getY())
+150: }, [points3]);
151:
+152:  {/* debug: check when 4 points are returned */}
+153: useEffect(() => {
+154: if(!context) return;
+155: if(!points4) return;
+156: context.clearRect(0, 0, 1920, 1080)
+157:    /* find the center point */
+158: const m = CrossLineLine( points4[0], points4[1], points4[2], points4[3]);
+159:    /* draw the center point */
+160: context.beginPath();
+161: context.arc(m.getX(), m.getY(), 10, 0, 2*Math.PI, false);
+162: context.fillStyle = 'green';
+163: context.fill();
+164: context.stroke()
+165:    /* rectangle */
+166: context.beginPath()
+167: context.moveTo( points4[0].getX(), points4[0].getY())
+168: context.lineTo( points4[1].getX(), points4[1].getY())
+169: context.lineTo( points4[2].getX(), points4[2].getY())
+170: context.lineTo( points4[3].getX(), points4[3].getY())
+171:    /* diagonals */
+172: context.moveTo( points4[0].getX(), points4[0].getY())
+173: context.lineTo( points4[2].getX(), points4[2].getY())
+174: context.moveTo( points4[1].getX(), points4[1].getY())
+175: context.lineTo( points4[3].getX(), points4[3].getY())
+176: context.strokeStyle = "red";
+177: context.lineWidth = 2;
+178: context.stroke()
+179: context.font = "48px serif";
+180: context.fillText("0",points4[0].getX(), points4[0].getY())
+181: context.fillText("1",points4[1].getX(), points4[1].getY())
+182: context.fillText("2",points4[2].getX(), points4[2].getY())
+183: context.fillText("3",points4[3].getX(), points4[3].getY())
+184: }, [points4]);
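Incidentally, the abs/min arithmetic in the 3-point effect (`xpwr`/`xm`/`ypwr`/`ym`) is simply the midpoint of the segment between point 0 and point 2, so a plain coordinate average yields the same values. A quick sketch with plain `{x, y}` objects standing in for `ResultPoint` (both function names are illustrative):

```typescript
type Pt = { x: number; y: number };

// Mirrors the article's xpwr/xm computation.
const midViaAbsMin = (p0: Pt, p2: Pt): Pt => {
  const xpwr = Math.abs(p2.x - p0.x);
  const ypwr = Math.abs(p2.y - p0.y);
  return {
    x: Math.min(p2.x, p0.x) + xpwr / 2,
    y: Math.min(p2.y, p0.y) + ypwr / 2,
  };
};

// The same midpoint written as a simple average.
const midViaAverage = (p0: Pt, p2: Pt): Pt => ({
  x: (p0.x + p2.x) / 2,
  y: (p0.y + p2.y) / 2,
});
```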
Implement the center-point function
Implement a function that finds the intersection of two line segments.
App.tsx
+196: /**********************/
+197: /* intersection of 2 segments */
+198: /* p1 ------ p2 */
+199: /* | | */
+200: /* | | */
+201: /* | | */
+202: /* p0 ------ p3 */
+203: /**********************/
+204: const CrossLineLine = (p00: ResultPoint, p01: ResultPoint, p02: ResultPoint, p03: ResultPoint) => {
+205: const s1: number = ((p02.getX()-p00.getX())*(p01.getY()-p00.getY())-(p02.getY()-p00.getY())*(p01.getX()-p00.getX())) / 2.0;
+206: const s2: number = ((p02.getX()-p00.getX())*(p00.getY()-p03.getY())-(p02.getY()-p00.getY())*(p00.getX()-p03.getX())) / 2.0;
+207: const x: number = p01.getX()+(p03.getX()-p01.getX()) * s1 / (s1+s2);
+208: const y: number = p01.getY()+(p03.getY()-p01.getY()) * s1 / (s1+s2);
+209: return new ResultPoint( x, y);
+210: }
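The same formula can be exercised outside React with plain `{x, y}` objects standing in for `ResultPoint` (`crossLineLine` below is a standalone copy for checking, not the one wired into App.tsx). With the corners ordered as in the ASCII diagram, the intersection of the diagonals p0-p2 and p1-p3 comes out at the center:

```typescript
type Pt = { x: number; y: number };

// Intersection of the diagonals p0-p2 and p1-p3 via the
// signed-area ratio s1/(s1+s2), same formula as CrossLineLine above.
const crossLineLine = (p0: Pt, p1: Pt, p2: Pt, p3: Pt): Pt => {
  const s1 = ((p2.x - p0.x) * (p1.y - p0.y) - (p2.y - p0.y) * (p1.x - p0.x)) / 2.0;
  const s2 = ((p2.x - p0.x) * (p0.y - p3.y) - (p2.y - p0.y) * (p0.x - p3.x)) / 2.0;
  return {
    x: p1.x + (p3.x - p1.x) * s1 / (s1 + s2),
    y: p1.y + (p3.y - p1.y) * s1 / (s1 + s2),
  };
};
```

Note the division by `s1 + s2`: the formula assumes the two diagonals actually cross, i.e. a non-degenerate quadrilateral.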
App.tsx
Full listing
App.tsx
1: import React, { useEffect, Suspense, useRef, useState, useMemo } from 'react';
2: import './App.css';
3: import { Canvas, useLoader, useFrame } from '@react-three/fiber'
4: import * as THREE from 'three'
5: import { OrbitControls, useFBX } from '@react-three/drei'
6: import { FBXLoader } from "three/examples/jsm/loaders/FBXLoader";
7: import { useZxing } from "react-zxing";
8: import ResultPoint from '@zxing/library/esm/core/ResultPoint';
9:
10: const FBXModel = (props:{setActionName: React.Dispatch<React.SetStateAction<string>>}) => {
11:   /* load the FBX model */
12: const fbx = useLoader(FBXLoader, "assets/Ch09_nonPBR.fbx");
13:   /* load the AnimationClip(s) */
14: const animCrips: THREE.AnimationClip[][] = []
15: animCrips[0] = useFBX('./assets/BreakdanceEnding2.fbx').animations
16: animCrips[1] = useFBX('./assets/BreakdanceUprockVar1.fbx').animations
17: animCrips[2] = useFBX('./assets/HipHopDancing.fbx').animations
18: animCrips[3] = useFBX('./assets/NorthernSoulSpin.fbx').animations
19: animCrips[4] = useFBX('./assets/SwingDancing.fbx').animations
20: animCrips[5] = useFBX('./assets/BreakdanceEnding1.fbx').animations
21: const animNames = ['BreakdanceEnding2', 'BreakdanceUprockVar1', 'HipHopDancing', 'NorthernSoulSpin', 'SwingDancing', 'BreakdanceEnding1']
22:   /* variable definitions */
23: const mixer = useRef<THREE.AnimationMixer>();
24: const [ animIdx, setAnimIdx ] = useState<number>(0);
25: const animActions = useMemo(() => [] as THREE.AnimationAction[], [])
26:
27:   /* initialization */
28: useEffect(() => {
29: fbx.scale.multiplyScalar(0.02)
30: mixer.current = new THREE.AnimationMixer(fbx)
31: animCrips.forEach((val: THREE.AnimationClip[], idx: number) => {
32: if(!mixer.current) return;
33: animActions[idx] = mixer.current.clipAction(val[0])
34: })
35:     new Promise<void>((resolve) => setTimeout(resolve, 1000)).then(()=>animActions[0].play())
36: }, [])
37:
38:   /* motion switching */
39: useEffect(() => {
40: const act: THREE.AnimationAction = animActions[animIdx]
41: act?.reset().fadeIn(0.3).play()
42: props.setActionName(animNames[animIdx] + ' : ' + animIdx)
43: return () => {
44: act?.fadeOut(0.3)
45: }
46: }, [animIdx])
47:
48:   /* per-frame processing */
49: useFrame((state, delta) => {
50: if(mixer.current)
51: mixer.current.update(delta);
52: const durationtime: number= animActions[animIdx].getClip().duration
53: const currenttime: number = animActions[animIdx].time
54:     if(currenttime/durationtime > 0.9/* past 90%: switch to the next motion */) {
55: const index: number = (animIdx+1) % (animCrips.length)
56: setAnimIdx( index )
57: }
58: });
59:
60: return (
61: <primitive object={fbx} position={[1, -1, 1]} />
62: )
63: }
64:
65: const ZxingQRCodeReader = (props:{setSize: React.Dispatch<React.SetStateAction<React.CSSProperties>>}) => {
+66:   {/* debug canvas */}
+67: const [context, setContext] = useState<CanvasRenderingContext2D>();
+68: const [points4, setPoints4] = useState<ResultPoint[]>();
+69: const [points3, setPoints3] = useState<ResultPoint[]>();
70:
71: const { ref } = useZxing({
72: constraints: {
73: audio: false,
74: video: {
75: facingMode: 'environment',
76: width: { min: 1024, ideal: 1920, max: 1920 },
77: height: { min: 576, ideal: 1080, max: 1080 },
78: },
79: },
+80: timeBetweenDecodingAttempts: 100,
- onError(ret) {
- console.log('onError::ret=', ret);
- },
- onDecodeError(ret) {
- console.log('onDecodeError::ret=', ret);
- },
81: onDecodeResult(result) {
82: console.log('onDecodeResult::result=', result);
83: if(result.getResultPoints().length <= 0) return;
84:
85: // setResult(result.getText());
86:
87: const points: ResultPoint[] = result.getResultPoints()
+88: if(points.length%4 == 0)
+89: setPoints4(points);
+90: else if(points.length%3 == 0)
+91: setPoints3(points);
92: console.log(points.length, " -----[0]: ", points[0]?.getX(), " ,", points[0]?.getY(),)
93: console.log(points.length, " -----[1]: ", points[1]?.getX(), " ,", points[1]?.getY(),)
94: console.log(points.length, " -----[2]: ", points[2]?.getX(), " ,", points[2]?.getY(),)
95: console.log(points.length, " -----[3]: ", points[3]?.getX(), " ,", points[3]?.getY(),)
96: },
97: });
98:
99:   /* resize the canvas to match the video size */
100: useEffect(() => {
101: if(!ref.current) return;
102: props.setSize({width: ref.current.videoWidth, height: ref.current.videoHeight});
+103: const canvas = document.getElementById("canvas") as HTMLCanvasElement
+104: canvas.width = ref.current.videoWidth;
+105:    canvas.height = ref.current.videoHeight;
+106: const context = canvas.getContext("2d")
+107: if(!context) return;
+108: setContext(context);
109: }, [ref.current?.videoWidth, ref.current?.videoHeight]);
110:
111: console.log("ref.current?.videoxxx=(", ref.current?.videoWidth, ",", ref.current?.videoHeight, ")" );
112:
+113:  {/* debug: check when 3 points are returned */}
+114: useEffect(() => {
+115: if(!context) return;
+116: if(!points3) return;
+117:    /* find the midpoint of the diagonal */
+118: const xpwr = Math.abs(points3[2].getX() - points3[0].getX());
+119: const xm = Math.min(points3[2].getX() , points3[0].getX()) + xpwr/2;
+120: const ypwr = Math.abs(points3[2].getY() - points3[0].getY());
+121: const ym = Math.min(points3[2].getY() , points3[0].getY()) + ypwr/2;
+122: const m = new ResultPoint( xm, ym);
+123: context.clearRect(0, 0, 1920, 1080)
+124:    /* draw the midpoint */
+125: context.beginPath();
+126: context.arc(m.getX(), m.getY(), 10, 0, 2*Math.PI, false);
+127: context.fillStyle = 'green';
+128: context.fill();
+129: context.stroke()
+130:    /* outline */
+131: context.beginPath()
+132: context.moveTo( points3[0].getX(), points3[0].getY())
+133: context.lineTo( points3[1].getX(), points3[1].getY())
+134: context.lineTo( points3[2].getX(), points3[2].getY())
+135:    /* lines to the midpoint */
+136: context.moveTo( points3[0].getX(), points3[0].getY())
+137: context.lineTo( m.getX(), m.getY())
+138: context.moveTo( points3[1].getX(), points3[1].getY())
+139: context.lineTo( m.getX(), m.getY())
+140: context.moveTo( points3[2].getX(), points3[2].getY())
+141: context.lineTo( m.getX(), m.getY())
+142:    /* stroke */
+143: context.strokeStyle = "red";
+144: context.lineWidth = 2;
+145: context.stroke()
+146: context.font = "48px serif";
+147: context.fillText("0",points3[0].getX(), points3[0].getY())
+148: context.fillText("1",points3[1].getX(), points3[1].getY())
+149: context.fillText("2",points3[2].getX(), points3[2].getY())
+150: }, [points3]);
151:
+152:  {/* debug: check when 4 points are returned */}
+153: useEffect(() => {
+154: if(!context) return;
+155: if(!points4) return;
+156: context.clearRect(0, 0, 1920, 1080)
+157:    /* find the center point */
+158: const m = CrossLineLine( points4[0], points4[1], points4[2], points4[3]);
+159:    /* draw the center point */
+160: context.beginPath();
+161: context.arc(m.getX(), m.getY(), 10, 0, 2*Math.PI, false);
+162: context.fillStyle = 'green';
+163: context.fill();
+164: context.stroke()
+165:    /* rectangle */
+166: context.beginPath()
+167: context.moveTo( points4[0].getX(), points4[0].getY())
+168: context.lineTo( points4[1].getX(), points4[1].getY())
+169: context.lineTo( points4[2].getX(), points4[2].getY())
+170: context.lineTo( points4[3].getX(), points4[3].getY())
+171:    /* diagonals */
+172: context.moveTo( points4[0].getX(), points4[0].getY())
+173: context.lineTo( points4[2].getX(), points4[2].getY())
+174: context.moveTo( points4[1].getX(), points4[1].getY())
+175: context.lineTo( points4[3].getX(), points4[3].getY())
+176: context.strokeStyle = "red";
+177: context.lineWidth = 2;
+178: context.stroke()
+179: context.font = "48px serif";
+180: context.fillText("0",points4[0].getX(), points4[0].getY())
+181: context.fillText("1",points4[1].getX(), points4[1].getY())
+182: context.fillText("2",points4[2].getX(), points4[2].getY())
+183: context.fillText("3",points4[3].getX(), points4[3].getY())
+184: }, [points4]);
185:
186:
187: return (
188: <>
189: <video ref={ref} />
+190:       {/* debug canvas */}
+191: <canvas id="canvas" width="1920" height="1080" style={{ position: "absolute", left: "0px", top: "0px", background: "#0088ff44"}}></canvas>
192: </>
193: );
194: };
195:
+196: /**********************/
+197: /* intersection of 2 segments */
+198: /* p1 ------ p2 */
+199: /* | | */
+200: /* | | */
+201: /* | | */
+202: /* p0 ------ p3 */
+203: /**********************/
+204: const CrossLineLine = (p00: ResultPoint, p01: ResultPoint, p02: ResultPoint, p03: ResultPoint) => {
+205: const s1: number = ((p02.getX()-p00.getX())*(p01.getY()-p00.getY())-(p02.getY()-p00.getY())*(p01.getX()-p00.getX())) / 2.0;
+206: const s2: number = ((p02.getX()-p00.getX())*(p00.getY()-p03.getY())-(p02.getY()-p00.getY())*(p00.getX()-p03.getX())) / 2.0;
+207: const x: number = p01.getX()+(p03.getX()-p01.getX()) * s1 / (s1+s2);
+208: const y: number = p01.getY()+(p03.getY()-p01.getY()) * s1 / (s1+s2);
+209: return new ResultPoint( x, y);
+210: }
211:
212: const App = () => {
213: const [actionName, setActionName] = useState<string>('aaabbb');
214: const [size, setSize] = useState<React.CSSProperties>({width: "300px", height: "200px"});
215:
216: return (
217: <div>
218: <ZxingQRCodeReader setSize={setSize}/>
219: <Canvas camera={{ position: [3, 1, 3] }} style={{ position: "absolute", left: "0px", top: "0px", width: `${size.width}px`, height: `${size.height}px`,}}>
220: <ambientLight intensity={2} />
221: <pointLight position={[40, 40, 40]} />
222: <Suspense fallback={null}>
223: <FBXModel setActionName={setActionName}/>
224: </Suspense>
225: <OrbitControls />
226: <axesHelper args={[5]} />
227: <gridHelper />
228: </Canvas>
229: <div id="summry" style={{background: "rgba(255, 192, 192, 0.7)"}}>{actionName}</div>
230: </div>
231: );
232: }
233:
234: export default App;
235:
Now run it.
It works!!
The guide lines are drawn right on top of the QR code.