Final Review

Review Nº 11

Trust Calibration of a Software AI Developer in Autonomous Urban Drone Navigation System
Authors: Adriano Danni, Riccardo La Leggia, Mariafrancesca Betta, Tea Peyron, Giulia Ingino, Raffaele Pansa, Hasti Naderi, Giovanni Salafia, Giovanni Caroni and Cristian Preutesi
Score: 21/30
The revision delivers meaningful improvements (a populated reference list, a proper coding scheme table, a formal conclusion, and a slightly broader research question), but the framework remains UAV-centric, and a new source-count inconsistency (42 vs. 53) has been introduced.

The Pros

+ All "Authors TBC" placeholder references have been replaced with fully cited entries, restoring evidentiary credibility.
+ A complete coding scheme table now appears in the appendix, mapping extracted factors → open codes → themes → key sources.
+ A formal Section 6 Conclusion has been added, providing a clear closing synthesis.
+ The PRISMA-ScR flow diagram makes the screening process replicable and visible.
+ The research question and conclusion language have been broadened toward "technical professional," nudging the framing beyond developers.
+ Some inline citations (e.g., [E20], [E10], [T24]) now anchor specific claims to specific sources.

The Cons

− The framework is still built around UAV/urban drone development; the title, the examples in Sections 3.1–3.6, and the use case are domain-specific, so the Type 3 generality requirement is still not satisfied.
− A new numerical inconsistency has been introduced: 42 sources are claimed in the method and conclusion, while the parenthetical breakdown and the PRISMA diagram both show 53.
− No diagram or summary table visualizes the six-theme framework itself, so relationships between layers remain invisible at a glance.
− Section 3.6 still overlaps substantially with Section 5 (Gaps), creating redundancy and weakening the framework's conceptual unity.
− Many subsections still rely on "Sources: …" blocks at the end rather than inline citation per claim, so the link between specific assertions and specific evidence is often unclear.
− The stakeholder perspective remains narrowly tied to the developer; broader technical professionals are invoked rhetorically but not analyzed.

Suggested Changes

01 · High
Location: Title (page 1)
Issue: The title still names "Software AI Developer" and "Autonomous Urban Drone Navigation System," signaling a domain-specific scope inconsistent with the Type 3 generality rule.
Suggested Fix: Rewrite as a generic title about technical-professional trust calibration in safety-critical AI, and demote the drone framing to a use-case subtitle.
02 · High
Location: Section 2, Method (page 1)
Issue: Source counts conflict: the text says "42 sources were included," but the parenthetical (20 + 8 + 25) and the PRISMA figure both yield 53; the reference list contains 42 entries.
Suggested Fix: Reconcile into a single authoritative number: verify how many sources were actually retained, then align the Method text, the PRISMA figure (n = …), the reference list, and the Section 6 Conclusion accordingly.
03 · High
Location: Top of paper (between the author list and Section 1)
Issue: No abstract appears in the document body, despite reviewers flagging this on the first submission.
Suggested Fix: Add a 150–200-word abstract covering motivation, method, the six-theme framework, the use-case takeaway, and the identified gap.
04 · High
Location: Section 3 (entire framework, pages 2–3)
Issue: Each theme is illustrated almost exclusively through UAV/drone or coding-assistant evidence, so the framework still does not read as cross-domain.
Suggested Fix: For each theme, add at least one supporting example from a non-UAV technical context (e.g., medical imaging engineering, industrial control software, financial model validation), and remove drone-specific phrasing from the theme headers.
05 · High
Location: Section 3.6, "Institutional Signals and Standards of Practice" (page 3)
Issue: The content still reads partly as a gap analysis and overlaps with Section 5, making the framework's sixth pillar weaker than the others.
Suggested Fix: Rewrite 3.6 as a synthesis of what institutional mechanisms (NIST RMF, ISO 5338, audit trails) do and do not contribute to calibration; move all "this area lacks evidence" language exclusively to Section 5.
06 · Medium
Location: End of Section 3 (page 3)
Issue: The six themes are presented sequentially, but their interactions (e.g., how 3.2 amplifies 3.4, or how 3.6 mitigates 3.1) are never made visible.
Suggested Fix: Add a one-page summary diagram or matrix showing the six themes and the directional relationships between them, plus a short caption naming the dominant interaction loops.
07 · Medium
Location: Sections 3.1–3.6, "Sources: …" blocks
Issue: Most claims in the body paragraphs are not individually cited; instead, a block of source codes appears at the end of each subsection.
Suggested Fix: Convert the end-of-section source dumps into inline citations attached to the specific claim each source supports, so a reader can trace any sentence to its evidence.
08 · High
Location: Section 4.1, mapping of categories to themes (page 4)
Issue: The numbering used in the use case ("Category 3.4 = Risk Perception," "Theme 3.5 = Accountability," "Topic 3.6 = Trust Measurement") does not match the actual section titles in Section 3.
Suggested Fix: Re-align the use-case labels to the exact theme numbers and titles used in Section 3 to avoid confusing the reader.
09 · Medium
Location: Section 4 (Use Case, pages 3–4)
Issue: The case is engaging but reads as a checklist of categories; the narrative argument that the developer is "pathologically oscillating" between overtrust and undertrust is buried.
Suggested Fix: Restructure 4.1 as continuous prose built around the oscillation argument, then keep 4.2 (Architectural Solutions) as the structured corrective section.
10 · Medium
Location: Section 1, Introduction (page 1)
Issue: The stakeholder perspective is still framed almost exclusively around the AI developer, which several reviewers flagged as too narrow.
Suggested Fix: Add a paragraph clarifying that the developer is one instance of a broader class of "technical workers who design, validate, or supervise AI," and explicitly position the framework as applicable to that class.
11 · Low
Location: References list (pages 4–6)
Issue: The numbering has gaps (P1 and P5 missing; T2, T7, T15, T19, T20, and T22 missing) without explanation, which can read as deleted "Authors TBC" entries rather than intentional pruning.
Suggested Fix: Either renumber consecutively, or add a one-line note in the appendix explaining that the gaps reflect sources removed during full-text review.
12 · Medium
Location: Appendix, AI Use Disclosure (page 8)
Issue: The disclosure says summaries were "manually checked" but does not state explicitly that every cited source was read in full by a team member, which Reviews 4 and 9 specifically requested.
Suggested Fix: Add an explicit sentence: "Every source cited in this review was read in full by at least one team member and human-verified before inclusion."
Final Review · Submission #11 · The Index · Grandi Sfide · 2026