Compare commits

4 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 31f59ba4a2 | |
| | b848154942 | |
| | 79db6adf45 | |
| | a06e473e85 | |

90 README.md
@@ -7,6 +7,11 @@
## ✨ Features

* **🎙️ Dual-Channel Recording**: Seamlessly capture your voice and meeting audio from apps like Microsoft Teams, Zoom, or Google Meet.
* **📁 Import Audio Files**: Upload existing recordings (MP3, MP4, WAV, M4A, FLAC, OGG, AAC, WMA) for transcription and summarization.
* **⏱️ Long Meeting Support**: Record meetings up to 2+ hours with automatic MP3 conversion and chunking.
* **🎵 Smart Auto-Stop**:
    * **Universal Auto-Stop**: Automatically stops recording after **20 seconds of silence** in ALL modes (Voice Memo & Meeting).
    * **Noise Filtering**: Enhanced VAD (Voice Activity Detection) ignores background noise and keyboard typing, only triggering on clear speech.
* **📅 Microsoft 365 Integration**:
    * **Upcoming Meetings**: View your daily schedule and join with **one click**.
    * **Meeting Details**: View full agenda and **invited attendee status** (Accepted/Declined).
@@ -16,14 +21,23 @@
* **Precision Transcription**: Standard-compliant formatting with **second-by-second timestamps**.
* **Smart Summaries**: Uses **Smart Templates** to automatically select the best format (Business Protocol vs. 1:1) based on meeting content.
* **🔇 Smart VAD**: Automatically filters out silence and stops recording when you stop talking.
* **🎨 White-Labeling**: Upload your **custom company logo** in Settings to brand the application.
* **🔒 Privacy-First**: Data is processed securely via your own Infomaniak API keys.

---

## 🚀 Getting Started

### 1. Prerequisites
* **macOS** (Apple Silicon or Intel).
### Required

* **macOS** (tested on macOS Monterey and later)
* **BlackHole 2ch Driver** ([Download here](https://existential.audio/blackhole/))
    * **MANDATORY** for system audio capture (MS Teams, Zoom, etc.)
    * Without this, you can only record microphone input
* **ffmpeg** for audio processing
```bash
brew install ffmpeg
```
* **Infomaniak AI Account**: You need an API Key and Product ID from the [Infomaniak Developer Portal](https://manager.infomaniak.com/).

### 2. Installation
@@ -35,15 +49,21 @@

## 🎧 Recording System Audio (Teams, Zoom, etc.)

We've made this easy! Hearbit AI includes a built-in helper to set up your audio devices.
We've made this easy! **Note: You must have the BlackHole driver installed.**

1. **Open Audio MIDI Setup**: Click the "Open Audio MIDI Setup" button in the recorder view.
2. **Create "Hearbit Audio" Device**:
    * If you don't have a virtual device, click **"🪄 Create Hearbit Audio Device"** in the app (appears in Meeting mode if no device is found).
    * This will automatically configure a Multi-Output Device so you can record and hear at the same time.
3. **Select "Hearbit Audio" in Teams/Zoom**:
    * In your meeting app settings (Teams/Zoom), set your **Speaker** to **Hearbit Audio**.
    * In Hearbit AI, select **Hearbit Audio** (or BlackHole) as your input.
1. **Create "Hearbit Audio" Device**:
    * Open the app and select **Meeting** mode.
    * If you don't have the device yet, click the **"🪄 Create Hearbit Audio Device"** button.
    * This creates a specialized "Multi-Output Device" that routes audio to both your headphones/speakers AND the app.

2. **Configure Teams / Zoom / Webex**:
    * **Speaker / Output**: Change this to **Hearbit Audio**.
        * *Why?* This ensures the audio goes to the recording app *and* your ears.
    * **Microphone / Input**: Leave this as your normal microphone (e.g., MacBook Pro Mic).
        * *Note:* Do **not** select Hearbit Audio as your microphone in Teams.

3. **Start Recording**:
    * In Hearbit AI, ensure **Hearbit Audio** is selected as the input.

---
@@ -72,6 +92,37 @@ We've made this easy! Hearbit AI includes a built-in helper to set up your audio devices.

---

## 🎨 Custom Branding (White-Labeling)

You can replace the default Livtec logo with your own company branding:

1. Go to **Settings** (gear icon) → **Branding**.
2. Click **Upload Logo**.
3. Select your file (PNG, JPG, SVG).
4. The logo updates immediately across the app.
5. *Tip*: Use a transparent PNG for best results.

---

## 📧 Advanced Email Templates

The email system supports **full HTML & JavaScript** templates. This allows for dynamic dashboards, charts, and interactive reports.

**How to use:**
1. Go to **Settings** → **Email**.
2. Create a new template.
3. Use `{{summary}}` as a placeholder for the raw AI JSON output.
4. In your HTML/Script, parse it:
```javascript
const reportData = {{summary}};
// Now you can use reportData.todos, reportData.updates, etc.
```
5. Use `{{date}}` for the current date and `{{subject}}` for the meeting title.

*Example*: Create a "Daily Standup Dashboard" that visualizes Blockers/Updates/Todos in a grid layout.
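To make the flow above concrete, here is a minimal sketch of what a template script could do once `{{summary}}` has been substituted. The field names (`todos`, `updates`, `blockers`) and the `renderSection` helper are assumptions for illustration; use whatever fields your AI template actually emits.

```javascript
// Sketch only: after Hearbit substitutes {{summary}}, the template script
// effectively starts with a literal object like this one. The field names
// below are hypothetical and depend on your AI template's JSON output.
const reportData = {
  todos: ["Ship v1.1.1", "Fix VAD thresholds"],
  updates: ["Import tab released"],
  blockers: []
};

// Render one <h2> + <ul> block per report section.
function renderSection(title, items) {
  const lis = items.map(item => `<li>${item}</li>`).join("");
  return `<h2>${title}</h2><ul>${lis}</ul>`;
}

const html =
  renderSection("Todos", reportData.todos) +
  renderSection("Updates", reportData.updates) +
  renderSection("Blockers", reportData.blockers);
```

From here the template can drop `html` into the email body and style it (e.g., with a CSS grid) for the dashboard layout.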

---

## ❓ Troubleshooting

### "Hearbit AI is damaged and can't be opened"
@@ -85,6 +136,25 @@ This is a standard macOS warning for apps not signed with an Apple Developer Certificate.
3. Enter your password.
4. Open the app again.

### Long Meetings (> 1 hour)

**Automatic Handling**: The app automatically handles long recordings:
- **MP3 Conversion**: All recordings are converted to MP3 (64 kbps) for 10x compression
- **Chunking**: Files ≥ 18 MB are automatically split into 10-minute segments
- **Processing**: Each segment is transcribed separately and merged with timestamps

**Example**: A 2-hour meeting:
1. Records as WAV (~120 MB)
2. Converts to MP3 (~12 MB)
3. Stays under the limit → no chunking needed!

**Very long meetings** (e.g., all-day workshops):
- Automatically chunks into segments
- Shows progress: "Processing chunk 1/15..."
- Merges all transcriptions seamlessly
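The rules above can be sketched as a quick back-of-the-envelope calculation. This is illustrative only (the app decides based on the actual converted file size, and the `planProcessing` helper below is hypothetical):

```javascript
// Mirrors the README's stated rules: ~10x MP3 compression,
// an 18 MB chunking threshold, and 10-minute segments.
function planProcessing(wavSizeMb, durationMinutes) {
  const mp3SizeMb = wavSizeMb / 10;        // ~10x compression
  const needsChunking = mp3SizeMb >= 18;   // 18 MB threshold
  const chunks = needsChunking ? Math.ceil(durationMinutes / 10) : 1;
  return { mp3SizeMb, needsChunking, chunks };
}

// The README's 2-hour example: a ~120 MB WAV shrinks to ~12 MB,
// which stays under the threshold, so it is processed as one file.
const twoHours = planProcessing(120, 120);
```

An all-day workshop, by contrast, lands well over 18 MB after conversion and gets split into 10-minute chunks.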

### No Audio / Can't Hear Meeting Participants

---

## 👨‍💻 Development
81 RELEASE_NOTES_1.1.0.md (Normal file)
@@ -0,0 +1,81 @@
# Release Notes - Version 1.1.0

**Release Date**: January 21, 2026

## 🎉 What's New

### Import Audio Files Feature

We've added a powerful new **Import** tab that allows you to upload and process existing audio/video files!

**Key Features:**
- **Drag-and-Drop Upload**: Simply drag your audio files into the app
- **8 Format Support**: MP3, MP4, WAV, M4A, FLAC, OGG, AAC, WMA
- **Smart Metadata Display**: See file duration, size, and format before processing
- **Editable Meeting Titles**: Customize the name (defaults to filename)
- **Progress Tracking**: Visual indicators for each stage (Validating → Transcribing → Summarizing)
- **Same AI Power**: Uses the same AI templates and Smart Select as live recordings
- **Auto-Navigation**: Seamlessly transition to the Transcription view when complete

**Use Cases:**
- Process pre-recorded meetings you forgot to record live
- Batch process voice memos
- Import recordings from other devices
- Archive and transcribe old meeting recordings

---

## 📝 Documentation Updates

### README Enhancements
- Added mandatory **BlackHole 2ch Driver** requirement to Prerequisites
- Clarified **Teams/Zoom configuration** (Speaker vs. Microphone settings)
- Added detailed setup instructions for meeting audio capture

---

## 🔧 Technical Improvements

- Added `get_audio_metadata` Rust command for file metadata extraction
- Improved tab navigation with new Import tab
- Enhanced error handling for file validation
- Code optimizations and cleanup

---

## 📦 Installation

Download the DMG file:
```
Hearbit_AI_1.1.0_aarch64.dmg
```

**Location**: `src-tauri/target/release/bundle/dmg/`

### First-time Installation
If you see "Hearbit AI is damaged and can't be opened":
```bash
sudo xattr -cr /Applications/Hearbit\ AI.app
```

---

## 🐛 Known Issues

None reported for this release.

---

## 🙏 Credits

Built with ❤️ by the Livtec team using Tauri, React, and TypeScript.

---

## What's Next?

Potential future enhancements:
- Meeting auto-stop when meeting ends (via M365 API)
- Batch file import
- Audio preview player
- More audio format conversions
@@ -1,7 +1,7 @@
{
  "name": "hearbit-ai",
  "private": true,
  "version": "1.1.0",
  "version": "1.1.1",
  "type": "module",
  "scripts": {
    "dev": "vite",
67 src-tauri/Cargo.lock (generated)
@@ -1739,8 +1739,9 @@ checksum = "841d1cc9bed7f9236f321df977030373f4a4163ae1a7dbfe1a51a2c1a51d9100"

[[package]]
name = "hearbit-ai"
version = "1.1.0"
version = "0.1.2"
dependencies = [
 "base64 0.22.1",
 "chrono",
 "cpal",
 "hound",
@@ -1757,6 +1758,7 @@ dependencies = [
 "tauri-plugin-log",
 "tauri-plugin-oauth",
 "tauri-plugin-opener",
 "tauri-plugin-shell",
 "tokio",
 "url",
 "voice_activity_detector",
@@ -3089,6 +3091,16 @@ dependencies = [
 "ureq",
]

[[package]]
name = "os_pipe"
version = "1.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7d8fae84b431384b68627d0f9b3b1245fcf9f46f6c0e3dc902e9dce64edd1967"
dependencies = [
 "libc",
 "windows-sys 0.61.2",
]

[[package]]
name = "pango"
version = "0.18.3"
@@ -4361,12 +4373,44 @@ dependencies = [
 "digest",
]

[[package]]
name = "shared_child"
version = "1.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e362d9935bc50f019969e2f9ecd66786612daae13e8f277be7bfb66e8bed3f7"
dependencies = [
 "libc",
 "sigchld",
 "windows-sys 0.60.2",
]

[[package]]
name = "shlex"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"

[[package]]
name = "sigchld"
version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "47106eded3c154e70176fc83df9737335c94ce22f821c32d17ed1db1f83badb1"
dependencies = [
 "libc",
 "os_pipe",
 "signal-hook",
]

[[package]]
name = "signal-hook"
version = "0.3.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d881a16cf4426aa584979d30bd82cb33429027e42122b169753d6ef1085ed6e2"
dependencies = [
 "libc",
 "signal-hook-registry",
]

[[package]]
name = "signal-hook-registry"
version = "1.4.8"
@@ -4951,6 +4995,27 @@ dependencies = [
 "zbus",
]

[[package]]
name = "tauri-plugin-shell"
version = "2.3.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "39b76f884a3937e04b631ffdc3be506088fa979369d25147361352f2f352e5ed"
dependencies = [
 "encoding_rs",
 "log",
 "open",
 "os_pipe",
 "regex",
 "schemars 0.8.22",
 "serde",
 "serde_json",
 "shared_child",
 "tauri",
 "tauri-plugin",
 "thiserror 2.0.18",
 "tokio",
]

[[package]]
name = "tauri-runtime"
version = "2.9.2"
@@ -1,6 +1,6 @@
[package]
name = "hearbit-ai"
version = "1.1.0"
version = "0.1.2"
description = "A Tauri App"
authors = ["you"]
edition = "2021"
@@ -18,7 +18,7 @@ crate-type = ["staticlib", "cdylib", "rlib"]
tauri-build = { version = "2", features = [] }

[dependencies]
tauri = { version = "2", features = [] }
tauri = { version = "2", features = ["tray-icon"] }
tauri-plugin-opener = "2"
tauri-plugin-dialog = "2"
serde = { version = "1", features = ["derive"] }
@@ -36,3 +36,5 @@ oauth2 = "4.4"
url = "2.5"
lettre = { version = "0.11", features = ["tokio1", "tokio1-native-tls", "builder"] }
tauri-plugin-log = "2.0.0"
tauri-plugin-shell = "2.3.4"
base64 = "0.22"
@@ -110,6 +110,9 @@ func createAggregateDevice() {
    }
    print("Found BlackHole 2ch (ID: \(blackHoleID))")

    // --- PART 1: Hearbit Audio (Input: Mic + BlackHole) ---
    print("\n--- Creating 'Hearbit Audio' (Input) ---")

    // Default Input
    var defaultInputID: AudioObjectID = 0
    var size = UInt32(MemoryLayout<AudioObjectID>.size)
@@ -125,19 +128,14 @@ func createAggregateDevice() {
    }
    print("Found Default Input (ID: \(defaultInputID))")

    // Check for existing "Hearbit Audio" by UID
    let targetUID = "hearbit_audio_aggregate_v1"
    if let existingID = findDeviceByUID(targetUID) {
        print("Found existing Hearbit Audio device (ID: \(existingID)). Destroying to recreate...")
        if AudioHardwareDestroyAggregateDevice(existingID) != noErr {
            print("Warning: Failed to destroy existing device.")
        } else {
            print("Existing device destroyed.")
        }
    // Check for existing "Hearbit Audio"
    let inputUID = "hearbit_audio_aggregate_v1"
    if let existingID = findDeviceByUID(inputUID) {
        print("Found existing Hearbit Audio (ID: \(existingID)). Destroying...")
        AudioHardwareDestroyAggregateDevice(existingID)
        Thread.sleep(forTimeInterval: 0.5)
    }

    // Build SubDevice List
    guard let bhUID = getStringProperty(objectID: blackHoleID, selector: kAudioDevicePropertyDeviceUID) else {
        print("Error: Could not get BlackHole UID.")
        exit(1)
@@ -147,36 +145,47 @@ func createAggregateDevice() {
        exit(1)
    }

    // Dedup: if Mic IS BlackHole (user set BlackHole as default), don't duplicate
    var subDevicesUIDs = [bhUID]
    if micUID != bhUID {
        subDevicesUIDs.append(micUID)
    }

    let subDevicesArray = subDevicesUIDs.map {
        [kAudioSubDeviceUIDKey: $0]
    }

    let desc: [String: Any] = [
    let subDevicesArray = subDevicesUIDs.map { [kAudioSubDeviceUIDKey: $0] }
    let inputDesc: [String: Any] = [
        kAudioAggregateDeviceNameKey: "Hearbit Audio",
        kAudioAggregateDeviceUIDKey: targetUID,
        kAudioAggregateDeviceUIDKey: inputUID,
        kAudioAggregateDeviceIsPrivateKey: Int(0),
        kAudioAggregateDeviceIsStackedKey: Int(0),
        kAudioAggregateDeviceSubDeviceListKey: subDevicesArray
    ]

    print("Creating Aggregate Device with UIDs: \(subDevicesUIDs)")

    var outID: AudioObjectID = 0
    let err = AudioHardwareCreateAggregateDevice(desc as CFDictionary, &outID)

    if err == noErr {
        print("Success! Created 'Hearbit Audio' with ID: \(outID)")
        exit(0)
    var outInputID: AudioObjectID = 0
    let errIn = AudioHardwareCreateAggregateDevice(inputDesc as CFDictionary, &outInputID)
    if errIn == noErr {
        print("Success! Created 'Hearbit Audio' with ID: \(outInputID)")
    } else {
        print("Failed to create device. Error code: \(err) (\(err.fourCC))")
        exit(1)
        print("Failed to create 'Hearbit Audio'. Error: \(errIn)")
    }

    // --- PART 2: Cleanup Unstable "Hearbit Speakers" ---
    // The previous "Hearbit Speakers" device caused MS Teams to crash.
    // We strictly remove it here to restore stability.
    print("\n--- Cleaning up Unstable Devices ---")
    let stopOutputUID = "hearbit_speakers_aggregate_v1"
    if let existingOutID = findDeviceByUID(stopOutputUID) {
        print("Found unstable 'Hearbit Speakers' (ID: \(existingOutID)). Removing to fix Teams crash...")
        let errDist = AudioHardwareDestroyAggregateDevice(existingOutID)
        if errDist == noErr {
            print("Successfully removed unstable device.")
        } else {
            print("Warning: Failed to remove device. Error: \(errDist)")
        }
    } else {
        print("No unstable 'Hearbit Speakers' found. System is clean.")
    }

    exit(0)
}

createAggregateDevice()
@@ -11,6 +11,9 @@ pub struct AudioProcessor {
    vad_chunk_size: usize,
    vad_buffer: Vec<f32>,

    // Audio Config
    channel_count: u16,

    // Resampler
    resampler: FastFixedIn<f32>,
    resample_input_buffer: Vec<f32>,
@@ -21,6 +24,9 @@ pub struct AudioProcessor {
    last_speech_time: u64, // In samples or frames
    hangover_samples: u64,

    // Waiting Mode
    waiting_for_speech: bool,

    // Ring Buffer (for pre-roll)
    ring_buffer: Vec<f32>,
    ring_pos: usize,
@@ -38,11 +44,13 @@ pub struct AudioProcessor {
impl AudioProcessor {
    pub fn new(
        sample_rate: u32,
        channel_count: u16,
        writer: Arc<Mutex<WavWriter<std::io::BufWriter<std::fs::File>>>>,
        app_handle: AppHandle
        app_handle: AppHandle,
        wait_for_speech: bool
    ) -> Result<Self, String> {
        let vad_sample_rate = 16000;
        let vad_chunk_size = 512; // Silero usually likes ~30ms which is 512 at 16k? No 16000 * 0.032 = 512.
        let vad_chunk_size = 512;

        // Initialize VAD
        let vad = VoiceActivityDetector::builder()
@@ -51,8 +59,7 @@ impl AudioProcessor {
            .build()
            .map_err(|e| format!("Failed to init VAD: {:?}", e))?;

        // Initialize Resampler (Input Rate -> 16000) using FastFixedIn for speed/simplicity
        // new(f_ratio, max_resample_ratio_relative, polyn_deg, chunk_size, channels)
        // Initialize Resampler (Input Rate -> 16000)
        let resampler = FastFixedIn::<f32>::new(
            16000.0 / sample_rate as f64,
            1.0,
@@ -61,20 +68,26 @@ impl AudioProcessor {
            1
        ).map_err(|e| format!("Failed to init Resampler: {:?}", e))?;

        // Pre-roll buffer (e.g. 0.5 seconds of high quality audio)
        // Pre-roll buffer (1.0 seconds) * Channels (interleaved store)
        let ring_curr_seconds = 1.0;
        let ring_size = (sample_rate as f32 * ring_curr_seconds) as usize;
        // WavWriter writes interleaved, so we store interleaved.
        let ring_size = (sample_rate as f32 * ring_curr_seconds) as usize * channel_count as usize;

        Ok(Self {
            vad,
            vad_chunk_size,
            vad_buffer: Vec::new(),
            channel_count,
            resampler,
            resample_input_buffer: Vec::new(),
            resample_output_buffer: Vec::new(),
            is_speech_active: false,
            last_speech_time: 0,
            hangover_samples: (sample_rate as f32 * 1.5) as u64, // 1.5s hangover
            // Hangover counts "processed samples" which are actually frames * channels in current logic?
            // Actually total_processed_samples usually counts FRAMES in audio terminology, but here we count elements.
            // Let's stick to elements to match existing logic logic.
            hangover_samples: (sample_rate as f32 * 1.5 * channel_count as f32) as u64,
            waiting_for_speech: wait_for_speech,
            ring_buffer: vec![0.0; ring_size],
            ring_pos: 0,
            ring_size,
@@ -87,30 +100,39 @@ impl AudioProcessor {
    }

    pub fn process(&mut self, data: &[f32]) {
        // 1. Add to Ring Buffer (always, for pre-roll)
        // 1. Add to Ring Buffer (Interleaved data - Record EVERYTHING)
        for &sample in data {
            self.ring_buffer[self.ring_pos] = sample;
            self.ring_pos = (self.ring_pos + 1) % self.ring_size;
        }

        // 2. Resample for VAD
        // We append new data to input buffer for resampler
        self.resample_input_buffer.extend_from_slice(data);
        // 2. Prepare VAD Signal (Mono Mixdown)
        // FRESH START LOGIC (v0.2.0):
        // We expect standard Stereo Input (BlackHole 2ch).
        // No magic 3-channel aggregate.

        // Process in chunks compatible with resampler
        // Actually rubato process_into_buffer needs waves of input.
        // Simplified: SincFixedIn wants a fixed number of input frames?
        // Docs: "retrieve result... input buffer must contain needed number of frames"
        // SincFixedIn: "input buffer used for resampling... must receive a fixed number of frames"
        // Wait, SincFixedIn is fixed INPUT size. SincFixedOut is fixed OUTPUT size.
        // We want to feed whatever we get.
        // For simplicity, let's use a simpler resampling strategy or accept rubato's constraints.
        // Rubato SincFixedIn: we must provide `input_frames_next` frames.
        let channels = self.channel_count as usize;
        let frame_count = data.len() / channels;
        let mut vad_input_chunk = Vec::with_capacity(frame_count);

        // Let's defer strict resampling and just use decimation if sample rate is multiple?
        // No, user devices vary.
        for i in 0..frame_count {
            let frame_start = i * channels;

            let mix_sample = if channels >= 2 {
                // Stereo -> Average L + R
                (data[frame_start] + data[frame_start + 1]) / 2.0
            } else {
                // Mono -> Take as is
                data[frame_start]
            };

            vad_input_chunk.push(mix_sample);
        }

        // 3. Resample for VAD
        self.resample_input_buffer.extend_from_slice(&vad_input_chunk);

        // Handling Resampling properly:
        let needed = self.resampler.input_frames_next();
        while self.resample_input_buffer.len() >= needed {
            let chunk: Vec<f32> = self.resample_input_buffer.drain(0..needed).collect();
@@ -127,63 +149,87 @@ impl AudioProcessor {
            // Update output buffer usage... logic is tricky with drain.
        }

        // 3. Process VAD
        // 4. Process VAD
        while self.vad_buffer.len() >= self.vad_chunk_size {
            let vad_chunk: Vec<f32> = self.vad_buffer.drain(0..self.vad_chunk_size).collect();
            // Run Detection
            // Run Detection
            let probability = self.vad.predict(vad_chunk.clone());

            // Calculate RMS for this chunk to use as fallback/hybrid detection
            let sq_sum: f32 = vad_chunk.iter().map(|x| x * x).sum();
            let rms = (sq_sum / vad_chunk.len() as f32).sqrt();

            // Hybrid VAD: Probability > 0.4 OR RMS > 0.005 (approx -46dB)
            let is_speech = probability > 0.4 || rms > 0.005;
            // Hybrid VAD: Probability > 0.9 OR RMS > 0.025
            // INCREASED THRESHOLDS (v1.1.1):
            // Reduced sensitivity to avoid background noise triggering recording.
            let is_speech = probability > 0.9 || rms > 0.025;

            if is_speech {
                self.is_speech_active = true;
                self.last_speech_time = self.total_processed_samples;
            }

            // Emit VAD event periodically (every 500ms)
            // Emit VAD event periodically (every 500ms is enough for non-diagnostic mode)
            if self.last_event_time.elapsed().as_millis() > 500 {
                // Calculate simple RMS of the current chunk for debugging
                let sq_sum: f32 = vad_chunk.iter().map(|x| x * x).sum();
                let rms = (sq_sum / vad_chunk.len() as f32).sqrt();

                // Print debug info to stdout (viewable in terminal)
                println!("VAD Debug: Prob={:.4}, RMS={:.6}, Speech={}", probability, rms, is_speech);

                if let Some(app) = &self.app_handle {
                    // Just sending probability is enough for now
                    #[derive(serde::Serialize, Clone)]
                    #[derive(Clone, serde::Serialize)]
                    struct VadEvent {
                        probability: f32,
                        is_speech: bool,
                        probability: f32,
                    }
                    let _ = app.emit("vad-event", VadEvent { probability, is_speech });
                    let _ = app.emit("vad-event", VadEvent {
                        probability,
                        is_speech: self.is_speech_active,
                    });
                }
                self.last_event_time = std::time::Instant::now();

                // IMPORTANT: We reset is_speech_active after emitting,
                // so we don't latch it forever if the user stops talking.
                // However, the main loop sets it to true if current chunk is speech.
                // This logic is a bit of a "latch for X ms".
                self.is_speech_active = false;
            }
        }

        // 4. Update Hangover and Check Write condition
        if self.waiting_for_speech {
            if self.is_speech_active {
                // Trigger Detected!
                println!("Auto-Start: Speech detected. Flushing pre-roll...");
                self.waiting_for_speech = false;

                // Flush Ring Buffer (Orderly: from ring_pos to end, then 0 to ring_pos)
                let mut guard = self.writer.lock().unwrap();
                let amplitude = i16::MAX as f32;

                // Part 1: ring_pos to end
                for i in self.ring_pos..self.ring_size {
                    let sample = self.ring_buffer[i];
                    guard.write_sample((sample * amplitude) as i16).ok();
                }
                // Part 2: 0 to ring_pos
                for i in 0..self.ring_pos {
                    let sample = self.ring_buffer[i];
                    guard.write_sample((sample * amplitude) as i16).ok();
                }

                // Emit event to notify frontend that "real" recording started
                if let Some(app) = &self.app_handle {
                    let _ = app.emit("auto-recording-triggered", ());
                }

            } else {
                // Still waiting, do not write to file.
                return;
            }
        }

        // Standard Recording Logic (Active or Hangover)
        let time_since_speech = self.total_processed_samples.saturating_sub(self.last_speech_time);

        if self.is_speech_active || time_since_speech < self.hangover_samples {
            // We are recording!
            // Check if we just started (transition)
            // Ideally we dump the ring buffer here if we just switched state.
            // Implementing perfect ring buffer dump is complex (need to track state changes better).
            // MVP: Just Write Current Data if in state.

            // Improvement: If we are in hangover, we just write.
            // If we just detected speech (was not speech?), dump ring buffer?
            // We'd need to know if we 'wrote' the ring buffer already.

            // Simple Logic: just write all incoming data if (Now - LastSpeech < Hangover)

            let mut guard = self.writer.lock().unwrap();
            for &sample in data {
                let amplitude = i16::MAX as f32;
@@ -1,9 +1,15 @@
|
||||
use tauri::{AppHandle, Manager, State, Emitter};
|
||||
use tauri::{
|
||||
AppHandle, Manager, State, Emitter,
|
||||
menu::{Menu, MenuItem},
|
||||
tray::{TrayIconBuilder, TrayIconEvent},
|
||||
WindowEvent
|
||||
};
|
||||
use std::sync::{Arc, Mutex};
|
||||
use std::process::Command;
|
||||
use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};
|
||||
use std::time::Duration;
|
||||
use tokio::time::sleep;
|
||||
use base64::Engine;
|
||||
|
||||
mod audio_processor;
|
||||
use audio_processor::AudioProcessor;
|
||||
@@ -65,7 +71,7 @@ fn get_input_devices() -> Result<Vec<AudioDevice>, String> {
|
||||
|
||||
|
||||
#[tauri::command]
|
||||
fn start_recording(app: AppHandle, state: State<'_, AppState>, device_id: String, save_path: Option<String>, custom_filename: Option<String>) -> Result<(), String> {
|
||||
fn start_recording(app: AppHandle, state: State<'_, AppState>, device_id: String, save_path: Option<String>, custom_filename: Option<String>, wait_for_speech: Option<bool>) -> Result<(), String> {
|
||||
emit_log(&app, "INFO", &format!("Starting recording on device: {}", device_id));
|
||||
let host = cpal::default_host();
|
||||
|
||||
@@ -77,15 +83,16 @@ fn start_recording(app: AppHandle, state: State<'_, AppState>, device_id: String
|
||||
.or_else(|| host.default_input_device())
|
||||
.ok_or("No input device found")?;
|
||||
|
||||
let config = device.default_input_config().map_err(|e| e.to_string())?;
|
||||
// Select the configuration with the MAXIMUM number of channels
|
||||
// This is crucial for "Hearbit Audio" (Aggregate) which lists 3 channels but might default to 2.
|
||||
// We want the raw 3 channels to separate Mic (Ch0) from System (Ch1+2).
|
||||
let supported_configs = device.supported_input_configs().map_err(|e| e.to_string())?;
|
||||
let config = supported_configs
|
||||
.max_by_key(|c| c.channels())
|
||||
.map(|c| c.with_max_sample_rate())
|
||||
.ok_or("No supported input configurations found")?;
|
||||
|
||||
// VAD requires 16Hz or 8kHz, typically. Silero likes 16k.
|
||||
// We might need to resample or just check if the device supports it.
|
||||
// For MVP VAD, we will try to stick to standard rates.
|
||||
// Actually, simple energy VAD is easier to start with if Silero is too heavy or requires ONNX runtime.
|
||||
// Let's check the crate docs or usage first.
|
||||
// Wait, the user wants to IGNORE music. Energy VAD will fail on music.
|
||||
// voice_activity_detector crate usually uses Silero or similar.
|
||||
    emit_log(&app, "INFO", &format!("Selected Audio Config: {} Channels, {} Hz", config.channels(), config.sample_rate()));

    let spec = hound::WavSpec {
        channels: config.channels(),
@@ -122,7 +129,12 @@ fn start_recording(app: AppHandle, state: State<'_, AppState>, device_id: String

    // Initialize AudioProcessor (VAD)
    // We pass the writer to it.
    let processor = AudioProcessor::new(config.sample_rate(), writer.clone(), app.clone())
    let should_wait = wait_for_speech.unwrap_or(false);
    if should_wait {
        emit_log(&app, "INFO", "Recording started in WAITING mode (buffer-only until speech).");
    }

    let processor = AudioProcessor::new(config.sample_rate(), config.channels(), writer.clone(), app.clone(), should_wait)
        .map_err(|e| format!("Failed to create AudioProcessor: {}", e))?;

    // Wrap processor in Arc<Mutex> so we can share/move it into callback
@@ -560,6 +572,189 @@ async fn summarize_text(app: AppHandle, text: String, api_key: String, product_i
    }
}

#[derive(serde::Serialize)]
struct AudioMetadata {
    duration: f64,
    size: u64,
    format: String,
}

// Helper to find ffmpeg/ffprobe in common paths
fn resolve_binary_path(binary_name: &str) -> String {
    let common_paths = [
        format!("/opt/homebrew/bin/{}", binary_name),
        format!("/usr/local/bin/{}", binary_name),
        format!("/usr/bin/{}", binary_name),
    ];

    for path in common_paths.iter() {
        if std::path::Path::new(path).exists() {
            return path.clone();
        }
    }

    // Fallback to expecting it in PATH
    binary_name.to_string()
}
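For reference, the same lookup strategy can be sketched in TypeScript (a hypothetical helper mirroring the Rust above, not part of the codebase):

```typescript
import { existsSync } from 'fs';

// Probe the common Homebrew/system locations first, then fall back to a bare
// name so the OS resolves it via PATH.
function resolveBinaryPath(binaryName: string): string {
  const commonPaths = [
    `/opt/homebrew/bin/${binaryName}`,
    `/usr/local/bin/${binaryName}`,
    `/usr/bin/${binaryName}`,
  ];
  for (const p of commonPaths) {
    if (existsSync(p)) return p;
  }
  return binaryName; // rely on PATH lookup
}
```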

#[tauri::command]
fn get_audio_metadata(app: AppHandle, file_path: String) -> Result<AudioMetadata, String> {
    emit_log(&app, "INFO", &format!("Getting metadata for: {}", file_path));

    let path = std::path::Path::new(&file_path);
    if !path.exists() {
        return Err(format!("File not found: {}", file_path));
    }

    let size = std::fs::metadata(&file_path)
        .map_err(|e| e.to_string())?
        .len();

    // Use ffprobe to get duration
    // Try resolved path first
    let ffprobe_cmd = resolve_binary_path("ffprobe");

    let output = Command::new(&ffprobe_cmd)
        .args([
            "-v", "error",
            "-show_entries", "format=duration",
            "-of", "default=noprint_wrappers=1:nokey=1",
            &file_path
        ])
        .output()
        .map_err(|e| format!("Failed to execute ffprobe at '{}': {}", ffprobe_cmd, e))?;

    let duration_str = String::from_utf8_lossy(&output.stdout);
    let duration: f64 = duration_str.trim().parse().unwrap_or(0.0);

    // Extension as format
    let format = path.extension()
        .and_then(|e| e.to_str())
        .unwrap_or("unknown")
        .to_string();

    Ok(AudioMetadata {
        duration,
        size,
        format,
    })
}
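With `-of default=noprint_wrappers=1:nokey=1`, ffprobe prints the duration as a bare number on stdout. The parsing step can be sketched standalone (hypothetical helper, mirroring the Rust `.parse().unwrap_or(0.0)`):

```typescript
// ffprobe output looks like "4132.807256\n"; parse it, defaulting to 0 on
// anything unparseable so a missing duration never aborts the import.
function parseFfprobeDuration(stdout: string): number {
  const n = Number.parseFloat(stdout.trim());
  return Number.isFinite(n) ? n : 0;
}
```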

#[tauri::command]
fn convert_to_mp3(app: AppHandle, wav_path: String) -> Result<String, String> {
    emit_log(&app, "INFO", &format!("Converting to MP3: {}", wav_path));

    let mp3_path = wav_path.replace(".wav", ".mp3");
    let ffmpeg_cmd = resolve_binary_path("ffmpeg");

    let output = Command::new(&ffmpeg_cmd)
        .args([
            "-i", &wav_path,
            "-codec:a", "libmp3lame",
            "-b:a", "64k",
            "-y", // overwrite
            &mp3_path
        ])
        .output()
        .map_err(|e| format!("Failed to execute ffmpeg at '{}': {}", ffmpeg_cmd, e))?;

    if output.status.success() {
        emit_log(&app, "SUCCESS", &format!("MP3 created: {}", mp3_path));
        Ok(mp3_path)
    } else {
        let error = String::from_utf8_lossy(&output.stderr);
        emit_log(&app, "ERROR", &format!("MP3 conversion failed: {}", error));
        Err(format!("MP3 conversion failed: {}", error))
    }
}
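One caveat worth noting: `wav_path.replace(".wav", ".mp3")` in Rust substitutes every occurrence of `.wav` in the path, not only the trailing extension. A suffix-safe variant of the path derivation (TypeScript sketch, hypothetical):

```typescript
// Only swap the final extension, so a path like "/tmp/my.wav.dir/rec.wav"
// keeps its directory name intact.
function wavToMp3Path(wavPath: string): string {
  return wavPath.endsWith('.wav')
    ? wavPath.slice(0, -4) + '.mp3'
    : wavPath + '.mp3';
}
```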

#[tauri::command]
fn chunk_audio(app: AppHandle, file_path: String, chunk_minutes: u32) -> Result<Vec<String>, String> {
    emit_log(&app, "INFO", &format!("Chunking audio: {} ({}min chunks)", file_path, chunk_minutes));

    let chunk_seconds = chunk_minutes * 60;
    let ffprobe_cmd = resolve_binary_path("ffprobe");
    let ffmpeg_cmd = resolve_binary_path("ffmpeg");

    // Get total duration using ffprobe
    let duration_output = Command::new(&ffprobe_cmd)
        .args([
            "-v", "error",
            "-show_entries", "format=duration",
            "-of", "default=noprint_wrappers=1:nokey=1",
            &file_path
        ])
        .output()
        .map_err(|e| format!("Failed to get duration with '{}': {}", ffprobe_cmd, e))?;

    let duration_str = String::from_utf8_lossy(&duration_output.stdout);
    let duration: f64 = duration_str.trim().parse()
        .map_err(|_| "Failed to parse duration".to_string())?;

    let num_chunks = (duration / chunk_seconds as f64).ceil() as usize;
    emit_log(&app, "INFO", &format!("Total duration: {}s, creating {} chunks", duration, num_chunks));

    let mut chunk_paths = Vec::new();
    let base_path = file_path.replace(".mp3", "");

    for i in 0..num_chunks {
        let start_time = i as u32 * chunk_seconds;
        let chunk_path = format!("{}_chunk_{}.mp3", base_path, i);

        let output = Command::new(&ffmpeg_cmd)
            .args([
                "-i", &file_path,
                "-ss", &start_time.to_string(),
                "-t", &chunk_seconds.to_string(),
                "-c", "copy",
                "-y",
                &chunk_path
            ])
            .output()
            .map_err(|e| format!("Failed to create chunk {} with '{}': {}", i, ffmpeg_cmd, e))?;

        if !output.status.success() {
            let error = String::from_utf8_lossy(&output.stderr);
            return Err(format!("Chunk {} failed: {}", i, error));
        }

        chunk_paths.push(chunk_path);
    }

    emit_log(&app, "SUCCESS", &format!("Created {} chunks", chunk_paths.len()));
    Ok(chunk_paths)
}
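The chunking math above (ceiling of duration over chunk length, with `-ss` offsets at fixed strides) can be sketched standalone:

```typescript
// Given a total duration and a chunk length, compute the -ss start offset of
// each chunk ffmpeg will be asked for (number of chunks = ceil(duration/len)).
function planChunks(durationSec: number, chunkMinutes: number): number[] {
  const chunkSeconds = chunkMinutes * 60;
  const numChunks = Math.ceil(durationSec / chunkSeconds);
  return Array.from({ length: numChunks }, (_, i) => i * chunkSeconds);
}
```

Because the Rust code uses `-c copy`, each chunk is a container-level cut with no re-encoding, which keeps chunking fast even for multi-hour files.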

#[tauri::command]
fn read_image_as_base64(app: AppHandle, file_path: String) -> Result<String, String> {
    emit_log(&app, "INFO", &format!("Reading image as base64: {}", file_path));

    let bytes = std::fs::read(&file_path)
        .map_err(|e| format!("Failed to read file: {}", e))?;

    // Detect image type from extension
    let extension = std::path::Path::new(&file_path)
        .extension()
        .and_then(|e| e.to_str())
        .unwrap_or("png")
        .to_lowercase();

    let mime_type = match extension.as_str() {
        "jpg" | "jpeg" => "image/jpeg",
        "png" => "image/png",
        "svg" => "image/svg+xml",
        "gif" => "image/gif",
        _ => "image/png"
    };

    // Use base64 encoding
    let base64_str = base64::prelude::BASE64_STANDARD.encode(&bytes);
    let data_url = format!("data:{};base64,{}", mime_type, base64_str);

    emit_log(&app, "SUCCESS", &format!("Image converted to base64 ({} bytes)", base64_str.len()));
    Ok(data_url)
}
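The extension-to-MIME mapping and data-URL assembly above, as a standalone sketch (hypothetical helpers, defaulting to PNG like the Rust match):

```typescript
// Map a file extension to a MIME type; unknown extensions fall back to PNG.
function mimeFromExtension(ext: string): string {
  switch (ext.toLowerCase()) {
    case 'jpg':
    case 'jpeg': return 'image/jpeg';
    case 'png': return 'image/png';
    case 'svg': return 'image/svg+xml';
    case 'gif': return 'image/gif';
    default: return 'image/png';
  }
}

// Assemble the RFC 2397 data URL the frontend embeds as the logo src.
function toDataUrl(mime: string, base64: string): string {
  return `data:${mime};base64,${base64}`;
}
```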

#[tauri::command]
fn open_audio_midi_setup() -> Result<(), String> {
    Command::new("open")
@@ -640,6 +835,49 @@ async fn read_log_file(app: AppHandle) -> Result<String, String> {
#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
    tauri::Builder::default()
        .setup(|app| {
            // Setup Tray Icon
            let quit_i = MenuItem::with_id(app, "quit", "Quit Hearbit AI", true, None::<&str>).unwrap();
            let show_i = MenuItem::with_id(app, "show", "Show Window", true, None::<&str>).unwrap();
            let menu = Menu::with_items(app, &[&show_i, &quit_i]).unwrap();

            let _tray = TrayIconBuilder::new()
                .icon(app.default_window_icon().unwrap().clone())
                .menu(&menu)
                .show_menu_on_left_click(true)
                .on_menu_event(|app, event| {
                    match event.id.as_ref() {
                        "quit" => app.exit(0),
                        "show" => {
                            if let Some(window) = app.get_webview_window("main") {
                                let _ = window.show();
                                let _ = window.set_focus();
                            }
                        }
                        _ => {}
                    }
                })
                .on_tray_icon_event(|tray, event| {
                    if let TrayIconEvent::Click { .. } = event {
                        let app = tray.app_handle();
                        if let Some(window) = app.get_webview_window("main") {
                            let _ = window.show();
                            let _ = window.set_focus();
                        }
                    }
                })
                .build(app)?;

            Ok(())
        })
        .on_window_event(|window, event| {
            if let WindowEvent::CloseRequested { api, .. } = event {
                // Prevent window from closing, just hide it
                window.hide().unwrap();
                api.prevent_close();
            }
        })
        .plugin(tauri_plugin_shell::init())
        .plugin(tauri_plugin_log::Builder::default()
            .targets([
                tauri_plugin_log::Target::new(tauri_plugin_log::TargetKind::Stdout),
@@ -670,6 +908,10 @@ pub fn run() {
            auth::get_calendar_events,
            save_text_file,
            read_log_file,
            get_audio_metadata,
            convert_to_mp3,
            chunk_audio,
            read_image_as_base64,
            email::send_smtp_email
        ])
        .run(tauri::generate_context!())

@@ -1,7 +1,7 @@
{
  "$schema": "https://schema.tauri.app/config/2",
  "productName": "Hearbit AI",
  "version": "1.1.0",
  "version": "1.1.1",
  "identifier": "com.hearbit-ai.desktop",
  "build": {
    "beforeDevCommand": "npm run dev",

42 src/App.tsx
@@ -7,6 +7,7 @@ import TranscriptionView from "./components/TranscriptionView";
import Tabs from "./components/Tabs";
import MeetingsView from "./components/MeetingsView";
import HistoryView from "./components/HistoryView";
import Import from "./components/Import";
import ToastContainer, { ToastMessage, ToastType } from "./components/ui/Toast";

export interface PromptTemplate {
@@ -24,8 +25,8 @@ export interface EmailTemplate {
}

function App() {
  const [view, setView] = useState<'recorder' | 'settings' | 'transcription' | 'meetings' | 'history'>('recorder');
  const [lastTab, setLastTab] = useState<'recorder' | 'transcription' | 'meetings' | 'history'>('recorder');
  const [view, setView] = useState<'recorder' | 'settings' | 'transcription' | 'meetings' | 'history' | 'import'>('recorder');
  const [lastTab, setLastTab] = useState<'recorder' | 'transcription' | 'meetings' | 'history' | 'import'>('recorder');

  // Auto-start recording state to handle "Join & Record" transition
@@ -311,6 +312,14 @@ Thanks!`
    }
  };

  const handleRenameHistory = (id: string, newSubject: string) => {
    const newHistory = history.map(item =>
      item.id === id ? { ...item, subject: newSubject } : item
    );
    setHistory(newHistory);
    localStorage.setItem('infomaniak_history', JSON.stringify(newHistory));
  };

  const handleDeleteHistory = (id: string) => {
    const newHistory = history.filter(item => item.id !== id);
    setHistory(newHistory);
@@ -343,7 +352,7 @@ Thanks!`
        </div>

        <Tabs
          currentTab={view as 'recorder' | 'transcription' | 'meetings' | 'history'}
          currentTab={view as 'recorder' | 'transcription' | 'meetings' | 'history' | 'import'}
          onTabChange={(t) => setView(t)}
        />
      </div>
@@ -351,7 +360,8 @@ Thanks!`

      <div className="flex-1 flex h-full overflow-hidden relative">
      <div className="flex-1 flex flex-col h-full overflow-hidden relative">
        {view === 'recorder' && (
        {/* Recorder - Persistent (Hidden via CSS to keep recording alive) */}
        <div className="flex-1 flex flex-col h-full overflow-hidden" style={{ display: view === 'recorder' ? 'flex' : 'none' }}>
          <Recorder
            apiKey={apiKey}
            productId={productId}
@@ -377,8 +387,9 @@ Thanks!`
            addToast={addToast}
            selectedModel={selectedModel}
            onModelChange={handleModelChange}
            isVisible={view === 'recorder'}
          />
        )}
        </div>

        {view === 'transcription' && (
          <TranscriptionView
@@ -410,6 +421,10 @@ Thanks!`
            history={history}
            onLoad={handleLoadHistory}
            onDelete={handleDeleteHistory}
            onRename={handleRenameHistory}
            smtpConfig={smtpConfig}
            emailTemplates={emailTemplates}
            addToast={addToast}
          />
        )}

@@ -429,6 +444,23 @@ Thanks!`
          />
        )}

        {view === 'import' && (
          <Import
            apiKey={apiKey}
            productId={productId}
            prompts={prompts}
            selectedModel={selectedModel}
            onSaveToHistory={handleSaveToHistory}
            onComplete={() => setView('transcription')}
            addToast={addToast}
            setTranscription={setTranscription}
            setSummary={setSummary}
          />
        )}

        {view === 'settings' && (
@@ -64,9 +64,14 @@ const EmailPreviewModal: React.FC<EmailPreviewModalProps> = ({
  const [activeTab, setActiveTab] = useState<'preview' | 'source'>('preview');

  const generateHtmlBody = (content: string, title: string) => {
    // Simple heuristic: if it looks like HTML, treat as HTML. Otherwise, markdown.
    const isHtml = /^\s*<(!DOCTYPE|html|div|p|table)/i.test(content);
    const formattedBody = isHtml ? content : formatMarkdownToHtml(content);
    // Check if it's a full HTML document
    if (/^\s*<!DOCTYPE html/i.test(content) || /^\s*<html/i.test(content)) {
      return content;
    }

    // Simple heuristic: if it looks like an HTML fragment (div, p, table), treat as HTML. Otherwise, markdown.
    const isHtmlFragment = /^\s*<(div|p|table|section|header|footer)/i.test(content);
    const formattedBody = isHtmlFragment ? content : formatMarkdownToHtml(content);

    return `
<!DOCTYPE html>
@@ -111,14 +116,17 @@ const EmailPreviewModal: React.FC<EmailPreviewModalProps> = ({
    // Replace placeholders
    const dateStr = new Date().toLocaleDateString();
    let newSub = tmpl.subject.replace(/{{date}}/g, dateStr).replace(/{{subject}}/g, "Meeting");
    // Note: the original 'recordingSubject' is not available here without more
    // prop drilling, so we default to "Meeting"; the user can edit it before sending.

    // Clean up JSON if necessary (e.g. remove markdown code blocks ```json ... ```)
    let cleanSummary = initialBody;
    if (initialBody.trim().startsWith('```')) {
      cleanSummary = initialBody.replace(/^```(json)?/i, '').replace(/```$/, '').trim();
    }
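The fence-stripping step above can be isolated into a small pure helper (sketch; same regexes as the component, so plain summaries pass through untouched):

```typescript
// Remove a wrapping ```json ... ``` fence the model sometimes emits around
// the summary; text without a leading fence is returned unchanged.
function stripCodeFence(text: string): string {
  const t = text.trim();
  if (!t.startsWith('```')) return text;
  return t.replace(/^```(json)?/i, '').replace(/```$/, '').trim();
}
```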

    let newBody = tmpl.body
      .replace(/{{date}}/g, dateStr)
      .replace(/{{subject}}/g, "the meeting")
      .replace(/{{summary}}/g, initialBody);
      .replace(/{{summary}}/g, cleanSummary);

    setSubject(newSub);
    setBody(generateHtmlBody(newBody, newSub));
@@ -242,7 +250,7 @@ const EmailPreviewModal: React.FC<EmailPreviewModalProps> = ({
              srcDoc={body}
              className="w-full h-full border-none"
              title="Email Preview"
              sandbox="allow-same-origin"
              sandbox="allow-same-origin allow-scripts"
            />
          </div>
        ) : (
@@ -1,4 +1,9 @@
import { FileText, Trash2, Calendar } from 'lucide-react';
import { FileText, Trash2, Calendar, Pencil, Check, X, Mail } from 'lucide-react';
import { useState } from 'react';
import EmailPreviewModal from './EmailPreviewModal';
import { SmtpConfig } from './Settings';
import { EmailTemplate } from '../App';
import { ToastType } from './ui/Toast';

interface HistoryItem {
  id: string;
@@ -13,9 +18,34 @@ interface HistoryViewProps {
  history: HistoryItem[];
  onLoad: (item: HistoryItem) => void;
  onDelete: (id: string) => void;
  onRename: (id: string, newSubject: string) => void;
  smtpConfig: SmtpConfig;
  emailTemplates: EmailTemplate[];
  addToast: (message: string, type: ToastType, duration?: number) => void;
}

export default function HistoryView({ history, onLoad, onDelete }: HistoryViewProps) {
export default function HistoryView({ history, onLoad, onDelete, onRename, smtpConfig, emailTemplates, addToast }: HistoryViewProps) {
  const [editingId, setEditingId] = useState<string | null>(null);
  const [editValue, setEditValue] = useState("");
  const [emailModalItem, setEmailModalItem] = useState<HistoryItem | null>(null);

  const startEditing = (item: HistoryItem) => {
    setEditingId(item.id);
    setEditValue(item.subject || "Untitled Recording");
  };

  const saveEdit = () => {
    if (editingId && editValue.trim()) {
      onRename(editingId, editValue.trim());
      setEditingId(null);
    }
  };

  const cancelEdit = () => {
    setEditingId(null);
    setEditValue("");
  };

  return (
    <div className="flex flex-col w-full h-full bg-background p-6">
      <h1 className="text-2xl font-bold mb-6 flex items-center gap-2">
@@ -33,23 +63,63 @@ export default function HistoryView({ history, onLoad, onDelete }: HistoryViewPr
          {history.map(item => (
            <div key={item.id} className="bg-card border border-border rounded-xl p-4 hover:shadow-md transition-all group">
              <div className="flex justify-between items-start">
                <div className="flex-1">
                  {editingId === item.id ? (
                    <div className="flex items-center gap-2 mb-2" onClick={(e) => e.stopPropagation()}>
                      <input
                        autoFocus
                        type="text"
                        className="flex-1 bg-background border border-input rounded px-2 py-1 text-sm font-semibold focus:outline-none focus:ring-1 focus:ring-ring"
                        value={editValue}
                        onChange={(e) => setEditValue(e.target.value)}
                        onKeyDown={(e) => {
                          if (e.key === 'Enter') saveEdit();
                          if (e.key === 'Escape') cancelEdit();
                        }}
                      />
                      <button onClick={saveEdit} className="p-1 text-green-500 hover:bg-green-500/10 rounded">
                        <Check size={16} />
                      </button>
                      <button onClick={cancelEdit} className="p-1 text-muted-foreground hover:bg-muted rounded">
                        <X size={16} />
                      </button>
                    </div>
                  ) : (
                    <div
                      className="flex-1 cursor-pointer"
                      className="cursor-pointer"
                      onClick={() => onLoad(item)}
                    >
                      <h3 className="text-lg font-semibold group-hover:text-primary transition-colors mb-1">
                      <h3 className="text-lg font-semibold group-hover:text-primary transition-colors mb-1 flex items-center gap-2">
                        {item.subject || "Untitled Recording"}
                        <button
                          onClick={(e) => { e.stopPropagation(); startEditing(item); }}
                          className="opacity-0 group-hover:opacity-100 text-muted-foreground hover:text-foreground p-1 rounded hover:bg-muted transition-all"
                          title="Rename"
                        >
                          <Pencil size={14} />
                        </button>
                      </h3>
                      <div className="flex items-center gap-2 text-xs text-muted-foreground mb-2">
                    </div>
                  )}

                  <div className="flex items-center gap-2 text-xs text-muted-foreground mb-2" onClick={() => !editingId && onLoad(item)}>
                    <Calendar size={12} />
                    {item.date}
                    {item.filename && <span className="bg-secondary px-1.5 py-0.5 rounded text-[10px] font-mono">{item.filename}</span>}
                  </div>
                  <p className="text-sm text-foreground/70 line-clamp-2">
                  <p className="text-sm text-foreground/70 line-clamp-2 cursor-pointer" onClick={() => !editingId && onLoad(item)}>
                    {item.summary ? item.summary.substring(0, 150) + "..." : "No summary available."}
                  </p>
                </div>

                <div className="flex items-center gap-2">
                  <button
                    onClick={(e) => { e.stopPropagation(); setEmailModalItem(item); }}
                    className="text-muted-foreground hover:text-primary p-2 rounded-lg hover:bg-primary/10 transition-colors opacity-0 group-hover:opacity-100"
                    title="Send Email"
                  >
                    <Mail size={18} />
                  </button>
                  <button
                    onClick={(e) => { e.stopPropagation(); onDelete(item.id); }}
                    className="text-muted-foreground hover:text-destructive p-2 rounded-lg hover:bg-destructive/10 transition-colors opacity-0 group-hover:opacity-100"
@@ -59,9 +129,21 @@ export default function HistoryView({ history, onLoad, onDelete }: HistoryViewPr
                  </button>
                </div>
              </div>
            </div>
          ))}
        </div>
      )}

      <EmailPreviewModal
        isOpen={emailModalItem !== null}
        onClose={() => setEmailModalItem(null)}
        initialRecipients={[]}
        initialSubject={emailModalItem?.subject || "Meeting Summary"}
        initialBody={emailModalItem?.summary || ""}
        emailTemplates={emailTemplates}
        smtpConfig={smtpConfig ? { ...smtpConfig, port: Number(smtpConfig.port) } : null}
        addToast={addToast}
      />
    </div>
  );
}
411 src/components/Import.tsx Normal file
@@ -0,0 +1,411 @@
import React, { useState } from 'react';
import { Upload, FileAudio, X, Check, Loader2 } from 'lucide-react';
import { invoke } from "@tauri-apps/api/core";
import { open } from '@tauri-apps/plugin-dialog';
import logo from '../assets/logo.png';

interface PromptTemplate {
  id: string;
  name: string;
  content: string;
  keywords?: string[];
}

interface ImportProps {
  apiKey: string;
  productId: string;
  prompts: PromptTemplate[];
  selectedModel: string;
  onSaveToHistory: (transcription: string, summary: string) => void;
  onComplete: () => void; // Navigate to Transcription view
  addToast: (msg: string, type: 'success' | 'error' | 'info', duration?: number) => void;
  setTranscription: (text: string) => void;
  setSummary: (text: string) => void;
}

interface AudioMetadata {
  duration: number;
  size: number;
  format: string;
}

type ProcessingStage = 'idle' | 'validating' | 'transcribing' | 'summarizing' | 'complete';

const SUPPORTED_FORMATS = ['mp3', 'mp4', 'm4a', 'wav', 'flac', 'ogg', 'aac', 'wma'];

const Import: React.FC<ImportProps> = ({
  apiKey,
  productId,
  prompts,
  selectedModel,
  onSaveToHistory,
  onComplete,
  addToast,
  setTranscription,
  setSummary
}) => {
  const [selectedFile, setSelectedFile] = useState<string | null>(null);
  const [metadata, setMetadata] = useState<AudioMetadata | null>(null);
  const [meetingTitle, setMeetingTitle] = useState('');
  const [stage, setStage] = useState<ProcessingStage>('idle');
  const [selectedPromptId, setSelectedPromptId] = useState<string>('');

  // Set default prompt
  React.useEffect(() => {
    if (prompts.length > 0 && !selectedPromptId) {
      setSelectedPromptId(prompts[0].id);
    }
  }, [prompts, selectedPromptId]);

  const validateFile = (filePath: string): boolean => {
    const extension = filePath.split('.').pop()?.toLowerCase();
    if (!extension || !SUPPORTED_FORMATS.includes(extension)) {
      addToast(`Unsupported format. Supported: ${SUPPORTED_FORMATS.join(', ').toUpperCase()}`, 'error', 5000);
      return false;
    }
    return true;
  };

  const extractFilename = (path: string): string => {
    const parts = path.split(/[/\\]/);
    const filename = parts[parts.length - 1];
    return filename.replace(/\.[^/.]+$/, ''); // Remove extension
  };

  const formatDuration = (seconds: number): string => {
    const mins = Math.floor(seconds / 60);
    const secs = Math.floor(seconds % 60);
    return `${mins}:${secs.toString().padStart(2, '0')}`;
  };

  const formatSize = (bytes: number): string => {
    if (bytes < 1024 * 1024) {
      return `${(bytes / 1024).toFixed(1)} KB`;
    }
    return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
  };

  const handleFileSelect = async (filePath: string) => {
    if (!validateFile(filePath)) return;

    setStage('validating');
    setSelectedFile(filePath);
    setMeetingTitle(extractFilename(filePath));

    try {
      const meta = await invoke<AudioMetadata>('get_audio_metadata', { filePath });
      setMetadata(meta);
      setStage('idle');
      addToast('File loaded successfully', 'success', 2000);
    } catch (e) {
      console.error('Metadata error:', e);
      setMetadata(null);
      setStage('idle');
    }
  };

  const handleManualSelect = async () => {
    try {
      const selected = await open({
        multiple: false,
        filters: [{
          name: 'Audio/Video',
          extensions: SUPPORTED_FORMATS
        }]
      });

      if (selected && typeof selected === 'string') {
        handleFileSelect(selected);
      }
    } catch (e) {
      console.error('File picker error:', e);
      addToast('Failed to open file picker', 'error');
    }
  };

  const handleProcess = async () => {
    if (!selectedFile) return;
    if (!apiKey || !productId) {
      addToast('Please configure API key in Settings', 'error');
      return;
    }

    try {
      // Check file extension
      const isWav = selectedFile.toLowerCase().endsWith('.wav');
      let processFile = selectedFile;

      // Convert WAV to MP3 if needed
      if (isWav) {
        setStage('validating');
        addToast('Converting WAV to MP3...', 'info', 2000);
        processFile = await invoke<string>('convert_to_mp3', { wavPath: selectedFile });
      }

      // Get file size to check if chunking needed
      const metadata = await invoke<AudioMetadata>('get_audio_metadata', { filePath: processFile });
      const sizeMB = metadata.size / (1024 * 1024);

      let transText = '';

      // Check if chunking needed for large files
      if (sizeMB >= 18) {
        // CHUNKING PATH for large files
        setStage('validating');
        addToast(`Large file (${sizeMB.toFixed(1)}MB). Splitting into chunks...`, 'info', 4000);

        const chunks = await invoke<string[]>('chunk_audio', {
          filePath: processFile,
          chunkMinutes: 10
        });

        addToast(`Processing ${chunks.length} chunks...`, 'info', 4000);

        let allTranscriptions: string[] = [];

        for (let i = 0; i < chunks.length; i++) {
          setStage('transcribing');
          addToast(`Transcribing chunk ${i + 1}/${chunks.length}...`, 'info', 2000);
          const chunkText = await invoke<string>('transcribe_audio', {
            filePath: chunks[i],
            apiKey,
            productId
          });
          allTranscriptions.push(chunkText);
        }

        // Merge transcriptions
        transText = allTranscriptions.join('\n\n--- Next Segment ---\n\n');
        addToast('All chunks transcribed successfully!', 'success', 3000);
      } else {
        // NORMAL PATH for small files
        setStage('transcribing');
        transText = await invoke<string>('transcribe_audio', {
          filePath: processFile,
          apiKey,
          productId
        });
      }

      setTranscription(transText);

      if (!transText || transText.trim().length === 0) {
        addToast('No speech detected in file', 'error');
        setStage('idle');
        return;
      }

      // Smart prompt selection (copied from Recorder.tsx)
      let activePrompt = prompts.find(p => p.id === selectedPromptId);
      const lowerText = transText.toLowerCase();
      let bestMatchId = selectedPromptId;
      let maxMatches = 0;

      for (const p of prompts) {
        if (!p.keywords) continue;
        let matches = 0;
        for (const kw of p.keywords) {
          if (lowerText.includes(kw.toLowerCase())) {
            matches++;
          }
        }
        if (matches > maxMatches) {
          maxMatches = matches;
          bestMatchId = p.id;
        }
      }

      if (bestMatchId !== selectedPromptId) {
        const newPrompt = prompts.find(p => p.id === bestMatchId);
        if (newPrompt) {
          addToast(`Smart Select: Switched to "${newPrompt.name}"`, 'info', 3000);
          activePrompt = newPrompt;
        }
      }
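The keyword-voting logic above reduces to a pure function (sketch; the `Template` shape mirrors the `PromptTemplate` interface declared earlier in this file):

```typescript
interface Template { id: string; keywords?: string[]; }

// Count case-insensitive keyword hits per template and return the best match,
// keeping the current selection when nothing scores higher.
function pickTemplate(templates: Template[], transcript: string, currentId: string): string {
  const lower = transcript.toLowerCase();
  let bestId = currentId;
  let maxMatches = 0;
  for (const t of templates) {
    const matches = (t.keywords ?? []).filter(kw => lower.includes(kw.toLowerCase())).length;
    if (matches > maxMatches) {
      maxMatches = matches;
      bestId = t.id;
    }
  }
  return bestId;
}
```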

      const promptContent = activePrompt ? activePrompt.content : "Summarize this.";

      setStage('summarizing');
      const sumText = await invoke<string>('summarize_text', {
        text: transText,
        apiKey,
        productId,
        prompt: promptContent,
        model: selectedModel
      });
      setSummary(sumText);

      // Save to history
      onSaveToHistory(transText, sumText);

      setStage('complete');
      addToast('Import complete!', 'success', 3000);

      // Navigate to Transcription view
      setTimeout(() => {
        onComplete();
      }, 1000);

    } catch (e) {
      console.error('Processing error:', e);
      addToast(`Error: ${e}`, 'error');
      setStage('idle');
    }
  };
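The 18 MB cutoff used in `handleProcess` (presumably a safety margin under the transcription API's upload limit; the exact limit is an assumption here) can be isolated as a tiny predicate:

```typescript
// Files at or above the threshold are split into 10-minute chunks before upload.
const CHUNK_THRESHOLD_MB = 18;

function needsChunking(sizeBytes: number): boolean {
  return sizeBytes / (1024 * 1024) >= CHUNK_THRESHOLD_MB;
}
```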
|
||||
|
||||
const handleReset = () => {
|
||||
setSelectedFile(null);
|
||||
setMetadata(null);
|
||||
setMeetingTitle('');
|
||||
setStage('idle');
|
||||
};
|
||||
|
||||
const getStageInfo = () => {
|
||||
switch (stage) {
|
||||
case 'validating': return { icon: Loader2, text: 'Validating file...', color: 'text-blue-500' };
|
||||
case 'transcribing': return { icon: Loader2, text: 'Transcribing audio...', color: 'text-purple-500' };
|
||||
case 'summarizing': return { icon: Loader2, text: 'Generating summary...', color: 'text-green-500' };
|
||||
case 'complete': return { icon: Check, text: 'Complete!', color: 'text-green-500' };
|
||||
default: return null;
|
||||
}
|
||||
};
|
||||
|
||||
const stageInfo = getStageInfo();
|
||||
const isProcessing = stage !== 'idle' && stage !== 'complete';
|
||||
|
||||
return (
|
||||
<div className="flex flex-col w-full h-full bg-background relative">
|
||||
{/* Header */}
|
||||
<div className="w-full flex justify-center items-center p-4 shrink-0">
|
||||
<img src={localStorage.getItem('customLogo') || logo} alt="Logo" className="h-10 object-contain" />
|
||||
</div>
|
||||
|
||||
{/* Main Content */}
|
||||
<div className="flex-1 overflow-y-auto px-6 pb-6 flex flex-col items-center">
|
||||
<h1 className="text-xl font-bold mb-2 text-foreground">Import Audio File</h1>
|
||||
<p className="text-muted-foreground mb-6 text-center text-sm">
|
||||
Select an audio file for transcription and summarization
|
||||
</p>
|
||||
|
||||
{/* File Selection Zone */}
|
||||
<div
|
||||
className={`w-full max-w-md border-2 border-dashed rounded-lg p-8 mb-6 transition-all ${selectedFile
|
||||
? 'border-green-500 bg-green-500/5'
|
||||
: 'border-border bg-secondary/30'
|
||||
}`}
|
||||
>
|
||||
<div className="flex flex-col items-center justify-center gap-4">
|
||||
{selectedFile ? (
|
||||
<>
`Import` component (new file — tail of the render):

```tsx
              <FileAudio size={48} className="text-green-500" />
              <div className="text-center">
                <p className="font-semibold text-foreground">{meetingTitle}</p>
                {metadata && (
                  <p className="text-xs text-muted-foreground mt-1">
                    {formatDuration(metadata.duration)} • {formatSize(metadata.size)} • {metadata.format.toUpperCase()}
                  </p>
                )}
              </div>
              <button
                onClick={handleReset}
                className="text-xs text-muted-foreground hover:text-foreground flex items-center gap-1"
              >
                <X size={14} /> Change file
              </button>
            </>
          ) : (
            <>
              <Upload size={48} className="text-muted-foreground" />
              <div className="text-center">
                <p className="font-semibold text-foreground">Select Audio File</p>
                <p className="text-xs text-muted-foreground mt-1">
                  Click below to browse your files
                </p>
              </div>
              <button
                onClick={handleManualSelect}
                disabled={isProcessing}
                className="px-6 py-3 bg-primary text-primary-foreground rounded-lg hover:bg-primary/90 disabled:opacity-50 text-base font-semibold transition-all shadow-md hover:shadow-lg"
              >
                Browse Files
              </button>
              <p className="text-xs text-muted-foreground">
                Supported: MP3, MP4, WAV, M4A, FLAC, OGG, AAC, WMA
              </p>
            </>
          )}
        </div>
      </div>

      {/* Configuration Section */}
      {selectedFile && (
        <div className="w-full max-w-md space-y-4">
          {/* Meeting Title */}
          <div>
            <label className="text-xs font-semibold text-muted-foreground uppercase tracking-wider block mb-1">
              Meeting Title
            </label>
            <input
              type="text"
              value={meetingTitle}
              onChange={(e) => setMeetingTitle(e.target.value)}
              disabled={isProcessing}
              className="w-full p-2 text-sm bg-secondary rounded border border-border outline-none focus:ring-2 focus:ring-primary disabled:opacity-50"
              placeholder="Enter meeting title..."
            />
          </div>

          {/* AI Template */}
          <div>
            <label className="text-xs font-semibold text-muted-foreground uppercase tracking-wider block mb-1">
              AI Template
            </label>
            <select
              value={selectedPromptId}
              onChange={(e) => setSelectedPromptId(e.target.value)}
              disabled={isProcessing || prompts.length === 0}
              className="w-full p-2 text-sm bg-secondary rounded border border-border outline-none focus:ring-2 focus:ring-primary disabled:opacity-50"
            >
              {prompts.map(p => (
                <option key={p.id} value={p.id}>{p.name}</option>
              ))}
              {prompts.length === 0 && <option value="">No templates</option>}
            </select>
          </div>

          {/* Process Button */}
          <button
            onClick={handleProcess}
            disabled={!selectedFile || isProcessing || !apiKey}
            className="w-full py-3 text-base font-semibold bg-primary text-primary-foreground rounded-lg hover:bg-primary/90 disabled:opacity-50 disabled:cursor-not-allowed transition-all shadow-md hover:shadow-lg flex items-center justify-center gap-2"
          >
            {isProcessing ? (
              <>
                <Loader2 size={20} className="animate-spin" />
                Processing...
              </>
            ) : (
              <>
                <Upload size={20} />
                Transcribe & Summarize
              </>
            )}
          </button>

          {/* Progress Indicator */}
          {stageInfo && (
            <div className="flex items-center justify-center gap-2 p-3 bg-secondary/50 rounded-lg border border-border">
              <stageInfo.icon size={16} className={`${stageInfo.color} ${stage !== 'complete' ? 'animate-spin' : ''}`} />
              <span className={`text-sm font-medium ${stageInfo.color}`}>
                {stageInfo.text}
              </span>
            </div>
          )}
        </div>
      )}
    </div>
  </div>
  );
};

export default Import;
```
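The JSX above calls `formatDuration` and `formatSize`, which are defined earlier in the file and not shown in this hunk. A minimal sketch of what such helpers might look like (hypothetical — the real implementations may format differently):

```typescript
// Hypothetical stand-ins for the formatDuration/formatSize helpers used above.

// Seconds -> "m:ss" (e.g. 125 -> "2:05").
function formatDuration(totalSeconds: number): string {
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = Math.floor(totalSeconds % 60);
  return `${minutes}:${String(seconds).padStart(2, '0')}`;
}

// Bytes -> "x.x MB" using binary megabytes (e.g. 1048576 -> "1.0 MB").
function formatSize(bytes: number): string {
  return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
}
```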
`Recorder` component:

```diff
@@ -1,4 +1,4 @@
-import React, { useState, useEffect } from 'react';
+import React, { useState, useEffect, useRef } from 'react';
 import { Mic, Square, Users, Headphones } from 'lucide-react';
 import { invoke } from "@tauri-apps/api/core";
 import { listen } from '@tauri-apps/api/event';
```
```diff
@@ -42,6 +42,7 @@ interface RecorderProps {
   addToast: (msg: string, type: 'success' | 'error' | 'info', duration?: number) => void;
   selectedModel: string;
   onModelChange: (model: string) => void;
+  isVisible: boolean;
 }

 interface AudioDevice {
@@ -58,6 +59,10 @@ const Recorder: React.FC<RecorderProps> = ({
   const [isRecording, setIsRecording] = useState(false);
   const [isStopping, setIsStopping] = useState(false); // New lock state
   const [isPaused, setIsPaused] = useState(false);
+  const [isWaiting, setIsWaiting] = useState(false); // New state for Auto-Start
+  const [autoStartEnabled, setAutoStartEnabled] = useState(false); // Toggle state
+
+
   const [status, setStatus] = useState<string>('Ready to record');
   const [selectedDevice, setSelectedDevice] = useState<string>('');
   const [selectedPromptId, setSelectedPromptId] = useState<string>('');
```
```diff
@@ -149,19 +154,33 @@ const Recorder: React.FC<RecorderProps> = ({

   const startRecording = async (deviceIdOverride?: string) => {
     try {
       setStatus('Starting...');
       // Check override or state
       const targetDeviceId = deviceIdOverride || selectedDevice;

-      // Pass customFilename (camelCase key maps to snake_case in Rust automatically or we need to check Tauri mapping, usually it maps camel to camel? Rust expects snake. Let's use snake_case in invoke args to be safe)
-      await invoke('start_recording', { deviceId: targetDeviceId, savePath: savePath || null, customFilename: props.recordingSubject || null });
+      await invoke('start_recording', {
+        deviceId: targetDeviceId,
+        savePath: savePath || null,
+        customFilename: props.recordingSubject || null,
+        waitForSpeech: autoStartEnabled // Pass the toggle state
+      });

       setIsRecording(true);
       setIsPaused(false);
       setTranscription('');
       setSummary('');

+      if (autoStartEnabled) {
+        setIsWaiting(true);
+        setStatus('Waiting for audio...');
+        addToast('Standing by for audio...', 'info', 3000);
+      } else {
+        setIsWaiting(false);
+        setStatus('Recording...');
+        addToast('Recording started', 'success', 2000);
+      }
+
     } catch (e) {
       console.error(e);
       setStatus(`Error: ${e}`);
```
```diff
@@ -170,43 +189,91 @@ const Recorder: React.FC<RecorderProps> = ({
     }
   };

-  // VAD & Auto-Stop Logic
-  useEffect(() => {
-    let unlisten: () => void;
+  // Refs for interval access to avoid dependency cycles
+  const lastSpeechTimeRef = useRef<number>(Date.now());
+  const isStoppingRef = useRef(false);

-    const setupListener = async () => {
-      unlisten = await listen<{ is_speech: boolean, probability: number }>('vad-event', (event) => {
+  // Update refs when state changes
+  useEffect(() => {
+    lastSpeechTimeRef.current = lastSpeechTime;
+  }, [lastSpeechTime]);
+
+  useEffect(() => {
+    isStoppingRef.current = isStopping;
+  }, [isStopping]);
+
+  // 1. Event Listeners Effect (Run ONCE when recording starts)
+  useEffect(() => {
+    let unlistenVAD: () => void;
+    let unlistenTrigger: () => void;
+
+    const setupListeners = async () => {
+      if (!isRecording) return;
+
+      console.log("Setting up VAD listeners...");
+      // VAD Event Listener
+      unlistenVAD = await listen<{ is_speech: boolean, probability: number }>('vad-event', (event) => {
         if (event.payload.is_speech) {
           setLastSpeechTime(Date.now());
+          lastSpeechTimeRef.current = Date.now(); // Update ref immediately
           setSilenceDuration(0);
         }
       });
+
+      // Auto-Start Trigger Listener
+      unlistenTrigger = await listen('auto-recording-triggered', () => {
+        console.log("Auto-Start Triggered from Backend!");
+        // Only trigger if we are actually waiting
+        setIsWaiting((prev) => {
+          if (prev) {
+            addToast("Audio detected! Recording started.", 'success', 4000);
+            return false;
+          }
+          return prev;
+        });
+        setStatus('Recording (Auto-Started)...');
+        setLastSpeechTime(Date.now());
+      });
     };

-    if (isRecording && !isPaused) {
-      setupListener();
-      setLastSpeechTime(Date.now()); // Reset on start
+    if (isRecording) {
+      setupListeners();
     }

     return () => {
+      // Cleanup listeners
+      if (unlistenVAD) unlistenVAD();
+      if (unlistenTrigger) unlistenTrigger();
     };
+  }, [isRecording, addToast]); // Dependencies for listener setup
+
+  // Ref for visibility to avoid closure staleness in interval
+  const isVisibleRef = useRef(props.isVisible);
+  useEffect(() => {
+    isVisibleRef.current = props.isVisible;
+  }, [props.isVisible]);
+
+  // Auto-Stop Interval Effect
+  useEffect(() => {
+    if (!isRecording || isPaused || isWaiting) return;

     const interval = setInterval(() => {
-      if (isRecording && !isPaused) {
-        const diff = (Date.now() - lastSpeechTime) / 1000;
-        setSilenceDuration(diff);
+      const now = Date.now();
+      const timeSinceSpeech = (now - lastSpeechTimeRef.current) / 1000;
+      setSilenceDuration(timeSinceSpeech);

-        // Auto-stop after 30 seconds of silence
-        if (diff > 30 && !isStopping) { // Check lock
-          console.log("Auto-stopping due to silence");
-          addToast("Auto-stopping (Silence detected)", "info", 3000);
+      // AUTO STOP Logic
+      // Use Ref to get LATEST visibility instantly
+      if (isVisibleRef.current && timeSinceSpeech > 20 && !isStoppingRef.current) {
+        console.log("Auto-stopping due to silence...");
+        isStoppingRef.current = true;
+        addToast('Auto-stopped due to silence', 'info');
         stopRecording();
       }
-      }
     }, 1000);

-    return () => {
-      if (unlisten) unlisten();
-      clearInterval(interval);
-    };
-  }, [isRecording, isPaused, lastSpeechTime]);
+    return () => clearInterval(interval);
+  }, [isRecording, isPaused, isWaiting, recordingMode, addToast]); // Removed props.isVisible dependency (using Ref)

   // Handle Auto Start Prop
   useEffect(() => {
```
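The refactor above routes the interval's silence check through refs (`lastSpeechTimeRef`, `isStoppingRef`, `isVisibleRef`) so the once-per-second callback reads the latest values instead of the stale ones captured in its closure when the effect first ran. A framework-free sketch of that pattern (names and numbers are illustrative, not taken from the component):

```typescript
// A plain mutable holder plays the role of React's useRef: the interval
// callback always reads .current, so updates made elsewhere are visible
// without tearing down and re-creating the timer.

interface Ref<T> { current: T; }

// Returns a callback that reports whether silence has exceeded the threshold.
function createSilenceWatchdog(
  lastSpeechTimeRef: Ref<number>,
  thresholdSeconds: number,
  now: () => number
): () => boolean {
  return () => (now() - lastSpeechTimeRef.current) / 1000 > thresholdSeconds;
}

// Frozen clock at t = 25s for a deterministic demonstration.
const speechRef: Ref<number> = { current: 0 };
const isSilentTooLong = createSilenceWatchdog(speechRef, 20, () => 25_000);

// No speech since t=0: 25s of silence, past the 20s threshold.
const before = isSilentTooLong();
// Speech arrives at t=10s: the SAME callback now sees only 15s of silence.
speechRef.current = 10_000;
const after = isSilentTooLong();
```

Had the callback closed over a plain `number` instead of the ref, `after` would still report 25 seconds of silence — exactly the staleness the diff removes.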
```diff
@@ -273,18 +340,66 @@ const Recorder: React.FC<RecorderProps> = ({
     try {
       setIsRecording(false);
       setIsPaused(false);
-      setStatus('Processing...');
+      setIsWaiting(false); // Reset waiting state
+      setStatus('Saving recording...');
       const filePath = await invoke<string>('stop_recording');

-      // Wait a moment for file flush (safety)
-      await new Promise(r => setTimeout(r, 500));
-
-      setStatus('Transcribing (Infomaniak Whisper)...');
-      const transText = await invoke<string>('transcribe_audio', {
-        filePath,
+      // Confirm recording saved
+      addToast(`Recording saved locally: ${filePath.split('/').pop()}`, 'success', 3000);
+      setStatus('Converting to MP3...');
+
+      // Small delay to show the "saved" message
+      await new Promise(r => setTimeout(r, 500));
+
+      // Convert WAV to MP3 for smaller size
+      const mp3Path = await invoke<string>('convert_to_mp3', { wavPath: filePath });
+
+      // Get file size to check if chunking needed
+      interface AudioMetadata { duration: number; size: number; format: string; }
+      const metadata = await invoke<AudioMetadata>('get_audio_metadata', { filePath: mp3Path });
+      const sizeMB = metadata.size / (1024 * 1024);
+
+      let transText = '';
+
+      // Check if chunking needed (only for Meeting mode and large files)
+      if (recordingMode === 'meeting' && sizeMB >= 18) {
+        // CHUNKING PATH for large meetings
+        setStatus(`Large file (${sizeMB.toFixed(1)}MB). Splitting into chunks...`);
+        const chunks = await invoke<string[]>('chunk_audio', {
+          filePath: mp3Path,
+          chunkMinutes: 10
+        });
+
+        addToast(`Processing ${chunks.length} chunks...`, 'info', 4000);
+
+        let allTranscriptions: string[] = [];
+
+        for (let i = 0; i < chunks.length; i++) {
+          setStatus(`Transcribing chunk ${i + 1}/${chunks.length}...`);
+          const chunkText = await invoke<string>('transcribe_audio', {
+            filePath: chunks[i],
+            apiKey,
+            productId
+          });
+          allTranscriptions.push(chunkText);
+        }
+
+        // Merge transcriptions
+        transText = allTranscriptions.join('\n\n--- Next Segment ---\n\n');
+        addToast('All chunks transcribed successfully!', 'success', 3000);
+      } else {
+        // NORMAL PATH for small files
+        setStatus('Transcribing (Infomaniak Whisper)...');
+        transText = await invoke<string>('transcribe_audio', {
+          filePath: mp3Path,
+          apiKey,
+          productId
+        });
+      }

       setTranscription(transText);

       // Check if transcription is empty or just whitespace
```
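The branch above only chunks meeting recordings of 18 MB or more, cutting them into 10-minute segments before transcription. The size gate and the resulting segment count can be sketched as follows (a simplified model; the actual splitting happens in the Rust `chunk_audio` command):

```typescript
// Mirror of the chunking decision in stopRecording: chunk only in meeting
// mode, and only when the converted MP3 is 18 MB or larger.
function needsChunking(sizeBytes: number, isMeeting: boolean): boolean {
  const sizeMB = sizeBytes / (1024 * 1024);
  return isMeeting && sizeMB >= 18;
}

// Number of fixed-length segments a recording splits into
// (the last segment may be shorter).
function chunkCount(durationSeconds: number, chunkMinutes: number): number {
  return Math.ceil(durationSeconds / (chunkMinutes * 60));
}
```

For example, a two-hour meeting (7200 s) at `chunkMinutes: 10` yields 12 uploads, each comfortably under typical transcription-API size limits.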
```diff
@@ -357,19 +472,21 @@ const Recorder: React.FC<RecorderProps> = ({
     }
   };

   return (
     <div className="flex flex-col w-full h-full bg-background relative">
       {/* Fixed Header - Reduced padding */}
       <div className="w-full flex justify-center items-center p-4 shrink-0">
-        <img src={logo} alt="Logo" className="h-10 object-contain" />
+        <img src={localStorage.getItem('customLogo') || logo} alt="Logo" className="h-10 object-contain" />
       </div>

       {/* Scrollable Content - Reduced spacing */}
       <div className="flex-1 overflow-y-auto px-6 pb-6 flex flex-col items-center">
         <div className="mb-4 relative shrink-0">
-          <div className={`w-24 h-24 rounded-full flex items-center justify-center transition-all duration-300 ${isRecording ? (isPaused ? 'bg-yellow-500/10' : 'bg-red-500/10 animate-pulse') : 'bg-secondary'}`}>
+          <div className={`w-24 h-24 rounded-full flex items-center justify-center transition-all duration-300 ${isRecording ? (isWaiting ? 'bg-blue-500/20' : isPaused ? 'bg-yellow-500/10' : 'bg-red-500/10 animate-pulse') : 'bg-secondary'}`}>
             {isRecording ? (
-              <div className={`w-16 h-16 rounded-full flex items-center justify-center shadow-[0_0_20px_rgba(239,68,68,0.5)] ${isPaused ? 'bg-yellow-500' : 'bg-red-500'}`}>
+              <div className={`w-16 h-16 rounded-full flex items-center justify-center shadow-[0_0_20px_rgba(239,68,68,0.5)] ${isWaiting ? 'bg-blue-500 animate-pulse' : isPaused ? 'bg-yellow-500' : 'bg-red-500'}`}>
                 <Mic size={32} className="text-white animate-bounce" />
               </div>
             ) : (
@@ -381,12 +498,12 @@ const Recorder: React.FC<RecorderProps> = ({
         </div>

         <h1 className="text-xl font-bold mb-1 text-foreground">
-          {isRecording ? (isPaused ? 'Paused' : 'Listening...') : 'Ready to Record'}
+          {isRecording ? (isWaiting ? 'Waiting for Audio...' : isPaused ? 'Paused' : 'Listening...') : 'Ready to Record'}
         </h1>

         <p className="text-muted-foreground mb-4 text-center text-xs h-5">
           {status}
-          {isRecording && !isPaused && silenceDuration > 10 && (
+          {isRecording && !isPaused && !isWaiting && silenceDuration > 10 && (
             <span className="block text-xs text-yellow-500 mt-0.5 opacity-80">
               Silence detected: {Math.floor(silenceDuration)}s
             </span>
@@ -395,15 +512,30 @@ const Recorder: React.FC<RecorderProps> = ({

         <div className="w-full max-w-sm space-y-3 mb-4 shrink-0">
           {!isRecording ? (
+            <>
             <button
               onClick={() => startRecording()}
               disabled={!apiKey || !productId}
               className="w-full py-3 text-base font-semibold bg-primary text-primary-foreground rounded-lg hover:bg-primary/90 disabled:opacity-50 disabled:cursor-not-allowed transition-all shadow-md hover:shadow-lg"
             >
-              {!apiKey ? 'Configure API Key First' : 'Start Recording'}
+              {!apiKey ? 'Configure API Key First' : (autoStartEnabled ? 'Standby (Auto-Start)' : 'Start Recording')}
             </button>
+            <div className="flex items-center justify-center gap-2 mt-2">
+              <label className="flex items-center gap-2 cursor-pointer select-none">
+                <input
+                  type="checkbox"
+                  checked={autoStartEnabled}
+                  onChange={(e) => setAutoStartEnabled(e.target.checked)}
+                  className="w-4 h-4 accent-primary rounded cursor-pointer"
+                />
+                <span className="text-xs text-muted-foreground font-medium">Auto-start when audio detected</span>
+              </label>
+            </div>
+            </>
           ) : (
             <div className="flex gap-2 w-full">
+              {/* In Waiting mode, we can only Stop (Cancel) */}
+              {!isWaiting && (
               <button
                 onClick={togglePause}
                 className={`flex-1 py-4 text-lg font-semibold rounded-lg transition-all shadow-md hover:shadow-lg flex items-center justify-center gap-2 ${isPaused
@@ -413,12 +545,13 @@ const Recorder: React.FC<RecorderProps> = ({
               >
                 {isPaused ? 'Resume' : 'Pause'}
               </button>
+              )}
               <button
                 onClick={stopRecording}
                 className="flex-1 py-4 text-lg font-semibold bg-destructive text-destructive-foreground rounded-lg hover:bg-destructive/90 transition-all shadow-md hover:shadow-lg flex items-center justify-center gap-2"
               >
                 <Square size={20} fill="currentColor" />
-                Stop
+                {isWaiting ? 'Cancel' : 'Stop'}
               </button>
             </div>
           )}
```
`Settings` component:

```diff
@@ -5,6 +5,7 @@ import { save, open } from '@tauri-apps/plugin-dialog';
 import { invoke } from '@tauri-apps/api/core';
 import { encryptData, decryptData } from '../utils/backup';
 import EmailTemplateEditor from './EmailTemplateEditor';
+import logo from '../assets/logo.png';

 import { PromptTemplate, EmailTemplate } from '../App';

@@ -382,6 +383,61 @@ const Settings: React.FC<SettingsProps> = ({ apiKey, productId, prompts, savePat
         </div>
       </div>

+      <div className="space-y-4">
+        <h3 className="text-foreground font-semibold border-b border-border pb-2">📸 Branding</h3>
+        <div className="p-4 bg-secondary/20 rounded border border-border/50">
+          <div className="mb-3">
+            <div className="font-medium text-sm mb-2">Custom Logo</div>
+            <div className="text-xs text-muted-foreground mb-3">Upload your company logo to replace the default Livtec branding throughout the app.</div>
+          </div>
+
+          {/* Logo Preview */}
+          <div className="flex items-center gap-4 mb-3">
+            <div className="w-20 h-20 bg-background border border-border rounded flex items-center justify-center overflow-hidden">
+              <img
+                src={localStorage.getItem('customLogo') || logo}
+                alt="Logo Preview"
+                className="max-w-full max-h-full object-contain"
+              />
+            </div>
+            <div className="flex-1">
+              <button
+                onClick={async () => {
+                  try {
+                    const selected = await open({
+                      filters: [{ name: 'Images', extensions: ['png', 'jpg', 'jpeg', 'svg'] }]
+                    });
+                    if (selected && typeof selected === 'string') {
+                      const dataUrl = await invoke<string>('read_image_as_base64', { filePath: selected });
+                      localStorage.setItem('customLogo', dataUrl);
+                      setStatusIdx('Logo uploaded! Save settings to apply.');
+                      // Force re-render
+                      window.dispatchEvent(new Event('storage'));
+                    }
+                  } catch (e) {
+                    setStatusIdx(`Logo upload failed: ${e}`);
+                  }
+                }}
+                className="bg-secondary hover:bg-secondary/80 text-xs px-3 py-2 rounded border border-border transition-all flex items-center gap-2"
+              >
+                <Upload size={14} /> Upload Logo
+              </button>
+              <button
+                onClick={() => {
+                  localStorage.removeItem('customLogo');
+                  setStatusIdx('Logo reset to default. Save to apply.');
+                  window.dispatchEvent(new Event('storage'));
+                }}
+                className="mt-2 bg-secondary hover:bg-secondary/80 text-xs px-3 py-2 rounded border border-border transition-all text-muted-foreground"
+              >
+                Reset to Default
+              </button>
+            </div>
+          </div>
+          <p className="text-[10px] text-muted-foreground">Supported: PNG, JPG, SVG. Recommended: Square format, transparent background.</p>
+        </div>
+      </div>
+
       <div className="space-y-4">
         <h3 className="text-foreground font-semibold border-b border-border pb-2">System Intergration</h3>
         <div className="flex items-center justify-between p-4 bg-secondary/20 rounded border border-border/50">
```
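The upload handler stores whatever `read_image_as_base64` returns directly in `localStorage` and feeds it to `<img src>`, so the command presumably returns a complete data URL. A sketch of how such a value could be assembled from raw base64 and the file extension (hypothetical — the actual Rust command's behavior may differ):

```typescript
// Hypothetical helper mirroring what a read_image_as_base64-style command
// might return: a self-contained data URL the <img> tag can render directly.

const MIME_BY_EXTENSION: Record<string, string> = {
  png: 'image/png',
  jpg: 'image/jpeg',
  jpeg: 'image/jpeg',
  svg: 'image/svg+xml',
};

function toDataUrl(base64: string, filePath: string): string {
  // Derive the MIME type from the extension; fall back to a generic type.
  const ext = filePath.split('.').pop()?.toLowerCase() ?? '';
  const mime = MIME_BY_EXTENSION[ext] ?? 'application/octet-stream';
  return `data:${mime};base64,${base64}`;
}
```

Storing a data URL rather than a filesystem path keeps the preview working even if the original file is moved or deleted, at the cost of `localStorage` size limits (typically around 5 MB), which is why a small square logo is recommended.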
`Tabs` component:

```diff
@@ -1,9 +1,9 @@
 import React from 'react';
-import { Mic, FileText, Calendar } from 'lucide-react';
+import { Mic, FileText, Calendar, Upload } from 'lucide-react';

 interface TabsProps {
-  currentTab: 'recorder' | 'transcription' | 'settings' | 'meetings' | 'history';
-  onTabChange: (tab: 'recorder' | 'transcription' | 'settings' | 'meetings' | 'history') => void;
+  currentTab: 'recorder' | 'transcription' | 'settings' | 'meetings' | 'history' | 'import';
+  onTabChange: (tab: 'recorder' | 'transcription' | 'settings' | 'meetings' | 'history' | 'import') => void;
 }

 const Tabs: React.FC<TabsProps> = ({ currentTab, onTabChange }) => {
@@ -16,6 +16,13 @@ const Tabs: React.FC<TabsProps> = ({ currentTab, onTabChange }) => {
         <Mic size={16} />
         Recording
       </button>
+      <button
+        onClick={() => onTabChange('import')}
+        className={`flex items-center gap-2 px-4 py-2 rounded-lg text-sm font-medium transition-colors ${currentTab === 'import' ? 'bg-secondary text-foreground' : 'text-muted-foreground hover:text-foreground hover:bg-secondary/50'}`}
+      >
+        <Upload size={16} />
+        Import
+      </button>
       <button
         onClick={() => onTabChange('transcription')}
         className={`flex items-center gap-2 px-4 py-2 rounded-lg text-sm font-medium transition-colors ${currentTab === 'transcription' ? 'bg-secondary text-foreground' : 'text-muted-foreground hover:text-foreground hover:bg-secondary/50'}`}
```
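Adding the `'import'` tab meant editing both the union type and the button list by hand. One way to keep the two in sync is to derive the union from a single registry — a sketch of an alternative design, not what the code above does:

```typescript
// Single source of truth for the tab set; the Tab union is derived from it,
// so adding an entry here updates the type and the rendered buttons together.
const TABS = [
  { id: 'recorder', label: 'Recording' },
  { id: 'import', label: 'Import' },
  { id: 'transcription', label: 'Transcription' },
  { id: 'settings', label: 'Settings' },
  { id: 'meetings', label: 'Meetings' },
  { id: 'history', label: 'History' },
] as const;

type Tab = (typeof TABS)[number]['id']; // 'recorder' | 'import' | ...

// Runtime guard matching the derived union, e.g. for restoring a saved tab.
function isTab(value: string): value is Tab {
  return TABS.some(t => t.id === value);
}
```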